OpenCL isn't dead; if you write your code from scratch, you can use it just fine and match CUDA performance. In my experience, OpenCL has two basic issues.
The first is the ecosystem. Nvidia went to great lengths to provide well-optimized libraries built on top of CUDA that supply the things people care about: deep learning primitives (cuDNN), dense and sparse linear algebra (cuBLAS, cuSPARSE), and so on. There's nothing meaningfully competitive on the OpenCL side of things.
The second is the user friendliness of the API and the implementations. OpenCL is basically analogous to OpenGL in design: it's a verbose, annoying C API with huge amounts of trivial boilerplate (see the sketch below). By contrast, CUDA supports most of the C++ convenience features relevant in this problem space and has decent tools, IDE integration, and debugger support.
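To make the boilerplate point concrete, here's a rough sketch of the OpenCL 1.x host code needed just to run a trivial vector-add kernel. Error checking and clRelease* cleanup are omitted, and the kernel and variable names are made up for illustration; the CUDA equivalent is a couple of cudaMalloc/cudaMemcpy calls plus a one-line triple-chevron launch.

    // Minimal OpenCL host-side sketch (C, OpenCL 1.x): illustrative only,
    // no error checking or clRelease* cleanup.
    #include <CL/cl.h>
    #include <stdio.h>

    // Kernel source is shipped as a string and compiled at runtime.
    static const char *src =
        "__kernel void add(__global const float *a,"
        "                  __global const float *b,"
        "                  __global float *c) {"
        "    size_t i = get_global_id(0);"
        "    c[i] = a[i] + b[i];"
        "}";

    int main(void) {
        // 1. Discover a platform and a GPU device.
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

        // 2. Create a context and a command queue.
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

        // 3. Build the program from source and fetch the kernel object.
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "add", NULL);

        // 4. Allocate device buffers and copy the input data over.
        enum { N = 1024 };
        float a[N], b[N], c[N];
        for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f; }
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof a, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof b, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

        // 5. Bind arguments one by one and launch over a 1-D range.
        clSetKernelArg(k, 0, sizeof(cl_mem), &da);
        clSetKernelArg(k, 1, sizeof(cl_mem), &db);
        clSetKernelArg(k, 2, sizeof(cl_mem), &dc);
        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

        // 6. Read the result back (blocking) and print one element.
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
        printf("c[42] = %f\n", c[42]);

        // For comparison, the CUDA host code for steps 1-5 collapses to
        // roughly: cudaMalloc/cudaMemcpy for the buffers, then
        //   add<<<N / 256, 256>>>(da, db, dc);
        return 0;
    }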
Neither of these issues is necessarily a dealbreaker if you're willing to invest the effort, but choosing OpenCL over CUDA means prioritizing portability over user friendliness, available libraries, and tooling. As a consequence, not many people choose OpenCL, and the dominance of CUDA continues to grow. Unfortunately, I don't see that changing in the near future.
When I looked at OpenCL, my impression was that it was simply a bundle of functionality that happened to be shared by different GPUs (a thin layer of syntax on top of each manufacturer's chip), whereas CUDA is a library that actually shields the programmer from the complexity of programming a GPU. I think AMD is working on an open-source system somewhat equivalent to CUDA, which would be nice to see develop. But it seems like the OpenCL consortium is the kind of organization that could never care about the individual developer or meet their needs. I'd love for someone to prove me wrong.
Almost as dead as its original creator, Steve Jobs, who only summoned it into existence because Jensen Huang leaked a deal between AAPL and NVDA before Jobs could announce it himself.
OpenCL could have been a killer feature on mobile, delivering low-power machine learning and computation there as far back as 2012, but both AAPL and GOOG went out of their way to cripple it despite many mobile GPUs having hardware support for it. We all lost there, IMO.