
$30 / million tokens to $5 / million tokens since GPT-4's original release = 6X improvement

4,000-token context to 128k-token context = 32X improvement

5.4-second voice-mode latency to 320 milliseconds = ~17X improvement.

I guess I got a bit excited by including cost, but that's close enough to an order of magnitude for me. And that's ignoring the fact that it's now literally free in ChatGPT.
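For anyone who wants to double-check the multipliers, a quick back-of-the-envelope in Python (all inputs taken straight from the figures quoted above):

    # Sanity check of the improvement multipliers quoted above.
    cost_x    = 30 / 5             # $/M tokens: $30 -> $5
    context_x = 128_000 / 4_000    # context window: 4,000 -> 128k tokens
    latency_x = 5.4 / 0.320        # voice latency: 5.4 s -> 320 ms
    print(cost_x, context_x, latency_x)  # -> 6.0 32.0 16.875

The latency ratio is really ~16.9X, so calling the whole set roughly an order of magnitude holds up.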

Thanks so much for posting this. The increased context length alone (obviously not just with OpenAI's models but the other big ones as well) has opened up a huge number of new use cases that I've seen tons of people and startups pounce on.
