Hacker News

Looks like they quantized a bit too aggressively. This sometimes happens with my 7B models. Imagine all the automated CI pipelines for LLM prompts going haywire on their tests today.
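For anyone unfamiliar with what over-aggressive quantization does: a minimal sketch (assuming simple symmetric integer quantization, not any specific provider's scheme) showing how reconstruction error grows as the bit width shrinks:

```python
import numpy as np

def quantize_dequantize(w, bits):
    """Round weights to a signed integer grid of the given bit width,
    then map back to floats; returns the reconstructed weights."""
    levels = 2 ** (bits - 1) - 1            # e.g. 127 for int8
    scale = np.max(np.abs(w)) / levels      # per-tensor scale factor
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale

rng = np.random.default_rng(0)
# Toy "weight tensor" with a roughly realistic spread
w = rng.normal(0, 0.02, size=10_000).astype(np.float32)

for bits in (8, 4, 3):
    err = np.abs(w - quantize_dequantize(w, bits)).mean()
    print(f"{bits}-bit mean abs error: {err:.6f}")
```

Each bit you drop roughly doubles the rounding error, and below ~4 bits (without smarter schemes) the model's outputs visibly degrade.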



Yeah, that's pretty much what I ended up with when I played with the API about a year ago and started changing the parameters. Everything would gradually turn into more and more confusing English incantations, and eventually not even proper words anymore.
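The parameter most likely to produce that effect is temperature. A quick sketch (toy logits, not any real model's) of why cranking it up turns output into word salad: the next-token distribution flattens toward uniform, so low-probability junk tokens get sampled:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax; higher temperature
    flattens the distribution toward uniform."""
    z = np.asarray(logits, dtype=np.float64) / temperature
    z -= z.max()                      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([5.0, 2.0, 1.0, 0.5])   # hypothetical next-token scores

for t in (0.5, 1.0, 2.0, 10.0):
    p = softmax_with_temperature(logits, t)
    print(f"T={t:>4}: probability of best token = {p.max():.3f}")
```

At low temperature the top token dominates; at high temperature all four tokens become nearly equally likely, which is exactly the "not even proper words" regime.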


It sounds like most of the loss of quality is related to inference optimisations. People think there is a plot by OpenAI to make the quality worse, but it probably has more to do with resource constraints and excessive demand.


I think the issue was exclusive to ChatGPT (a web frontend for their models); issues with ChatGPT don't usually affect the API.





