
My guess is that they would be fine with continuing to serve all models, but hardware constraints are forcing difficult decisions. Sam Altman has already said that hardware is holding them back from what they want to do. I was on the waiting list for the GPT-4 API for a few months, which I guess was because they couldn't keep up with demand.


