Lots of companies are doing ASICs for machine learning. Off the top of my head: Graphcore, Cerebras, Tenstorrent, Wave. This site claims there are 187 of them (which seems unlikely) https://tracxn.com/d/trending-themes/Startups-in-AI-Processo.... Google's TPU counts, and there are periodic rumours about Amazon and Meta building their own (might be reality now, I haven't been watching closely).
As far as I can tell that gamble isn't working out particularly well for any of the startups, but that might be money drying up before they've hit commercial viability. From what I know the hardware is pretty good for Graphcore and Cerebras, but the software is proving difficult.
A lot of companies are indeed trying to build AI accelerator cards, but I would not necessarily call them ASICs in the narrow sense of the word. They are by necessity always quite programmable and flexible: NN workload characteristics change much, much faster than you can design and manufacture chips.
I would say they are more like GPUs or DSPs: programmable, but optimised for a specific application domain, ML/AI workloads in this case. Sometimes people call these ASIPs: application-specific instruction set processors. While maybe not a very commonly used term, it is technically more correct.
I have experience with companies doing their own chips. As often as not, what seems like a good idea turns out not to be, because your volume is low and your ability to get to high yield dominates, and that takes both years and talent.
As a rule, companies should only do their own chips if they are certain they can solve and overcome the COGS (cost of goods sold) problems that low-yield and low-volume penalties entail. If not, you are almost certainly better off just eating the vendor margin. It is very, very unlikely that you will do better.
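The yield and volume penalties can be made concrete with a toy cost model. All the figures below are made-up assumptions for illustration, not real numbers from any vendor or fab:

```python
# Toy model: effective per-unit silicon cost for an in-house chip vs. a
# high-yield vendor part. Every number here is an illustrative assumption.

def cost_per_good_die(wafer_cost: float, dies_per_wafer: int, yield_rate: float) -> float:
    """Silicon cost per *working* die: bad dies still cost wafer money."""
    return wafer_cost / (dies_per_wafer * yield_rate)

def amortized_unit_cost(nre_cost: float, volume: int, per_die_cost: float) -> float:
    """Per-unit cost once one-time engineering (NRE) is spread over volume."""
    return nre_cost / volume + per_die_cost

# Hypothetical $15k wafer with 60 candidate dies.
vendor_die  = cost_per_good_die(15_000, 60, 0.90)  # mature process, high yield
inhouse_die = cost_per_good_die(15_000, 60, 0.40)  # early in-house, low yield

# Hypothetical $50M NRE, amortized over very different volumes.
inhouse_unit = amortized_unit_cost(50e6, 100_000, inhouse_die)     # low volume
vendor_unit  = amortized_unit_cost(50e6, 10_000_000, vendor_die)   # high volume

print(f"in-house: ~${inhouse_unit:,.0f}/unit vs. vendor-scale: ~${vendor_unit:,.0f}/unit")
```

With these made-up inputs the low-yield, low-volume chip comes out several times more expensive per unit even before the vendor adds any margin, which is the point: the margin you're trying to avoid has to exceed that gap.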