> It's frustrating how much potential that platform has for this kind of thing (given the way the GPU shares memory with the CPU) that isn't yet harnessed because most of the ecosystem is built around NVIDIA and CUDA.
I'm sure it's frustrating from a consumer perspective, but it should be no surprise that Nvidia won here. CUDA shipped unified memory addressing ten years before the M1 hit shelves. On top of that, their architecture and OS support are top-notch, you can ship your CUDA code on anything from a $250 Jetson to a $300,000 DGX system, and their hardware is relatively ubiquitous.
The frustrating thing is how companies like Apple and Nvidia insist on being each other's enemies. Only consumers feel the pain when researchers discover cool stuff like this and want to share.
You can add an Nvidia card to basically any kind of hardware: a desktop, a server, or, most importantly, a rented cloud instance. You must buy a Mac to use an M1/M2. Given the cost of some cards that could make sense, but then everybody using that software would have to buy a Mac too.
A lot of people already have M1s or M2s, and they would otherwise have to pay for access to an Nvidia card. I think that's the disconnect. It's more about making use of what a lot of people (and let's not forget a lot of developers) already have.
I've personally got an 8GB M1 MacBook as my work development machine, and while I'm having a lot of fun with llama.cpp, it does feel somewhat disconnected from the bulk of the ML ecosystem.