
This breaks my brain, because I know Google trains its models on TPUs and they're seen as faster. If they're better at inference and can also train, then why is Nvidia in a unique position? My understanding was always that it's as simple as TPUs requiring esoteric tooling.



There are multiple types of TPUs.

(I work for Google, but the above is public information.)


Because people generally don’t use TPUs outside of Google. The tooling is different, the access is metered through GCP, etc.

Nvidia is in a fairly unique position in that their products have great tooling support and few companies sell silicon at their scale.
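
To make the tooling gap concrete, here's a rough sketch (assuming a recent PyTorch install, plus the separate torch_xla package that only really exists on GCP TPU VMs) of what "put a tensor on the accelerator" looks like on each stack:

    import torch

    # On an Nvidia box the accelerator path is one line; matmuls,
    # convs, etc. dispatch to cuBLAS/cuDNN kernels under the hood.
    if torch.cuda.is_available():
        x = torch.randn(1024, 1024, device="cuda")
        y = x @ x  # runs on the GPU, no extra tooling required

    # On a TPU VM you go through the separate torch_xla package,
    # which traces ops into an XLA graph and compiles them lazily.
    try:
        import torch_xla.core.xla_model as xm
        x = torch.randn(1024, 1024, device=xm.xla_device())
        y = x @ x
        xm.mark_step()  # force compilation/execution of the traced graph
    except ImportError:
        pass  # torch_xla is typically only installed on TPU VMs

Same framework, but one path is a drop-in device string and the other is a separate package with its own execution model, and you can only rent the hardware through GCP.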


Correct. I'm politely pointing out that this conflicts with what the person I'm replying to said.


Possibly naive, but I very much view CUDA and its integration into ML frameworks as Nvidia's moat.
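
For what it's worth, that integration is visible even in a toy training step. A minimal PyTorch sketch (standard APIs, nothing here is Nvidia-specific in the source):

    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Nothing below mentions Nvidia, yet on a GPU every op hits a
    # hand-tuned CUDA backend: cuBLAS for the matmuls, fused CUDA
    # kernels for the optimizer update, and so on.
    model = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)
    ).to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

    x = torch.randn(64, 784, device=device)
    target = torch.randint(0, 10, (64,), device=device)

    loss = nn.functional.cross_entropy(model(x), target)
    loss.backward()  # autograd's fast path is the CUDA backend too
    opt.step()

The moat isn't that this code is hard to write; it's that a decade of framework and library work makes the CUDA path the default fast path everywhere, while other accelerators need their own backends and porting effort.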



