Hacker News
Show HN: Unofficial OpenAI Status Dashboard (llm-utils.org)
27 points by tikkun on July 13, 2023 | 11 comments
Whenever I got abnormally slow responses or high error rates from GPT-4, I used to check the official OpenAI status site. But it would almost always show green, even when I was experiencing issues.

So Eliot, who I work with, built this little utility: an unofficial OpenAI status dashboard.

It's designed to answer the questions "is GPT slow for everyone right now, or just for me?" and "is GPT having elevated error rates for everyone right now, or just for me?"

It's aimed at API users but may also be useful for ChatGPT errors/slowness, I'm not sure.

Possible features we're thinking about:

* Gauge-style chart that's easier to read

* Show the average and delta vs average in the chart

* Track a specific prompt over time to detect any model performance degradation

* Add stats specifically for ChatGPT (any ideas on how to do so, given that it's behind Cloudflare?)

* Add speed comparison of APIs via Azure vs OpenAI

Let us know what's helpful about it and what you'd hope to see improved or added.




Great work! I've been thinking how helpful it would be if you could provide an API to access those results. That way, in our application, we could easily pick the fastest model based on its current speed.

I operate a SaaS product that utilizes generative AI. Many users have been expressing concerns about the response speed. Interestingly, it seems that the speed is slower during the day and faster at night. Once again, this is fantastic information!


Thanks for the compliments!

For the API / switching based on speed: do you mean specifically between identical or nearly identical models, like gpt-4 and gpt-4-0613, or across non-identical models too?


Pretty cool stuff. Way more concrete than the official site, since you measure a proper metric like time to completion.

The charts look too dense and jagged on mobile though.

Also, this could be pretty smart to have when speed is more important than the model. Then you could automatically select the fastest model from a prioritized list, in case the better ones become slow.
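That selection logic could be sketched roughly like this (everything here is hypothetical: the threshold, the `pick_model` helper, and the `speeds` dict standing in for latency data like this dashboard collects):

```python
SLOW_THRESHOLD_S = 20.0  # assumed cutoff for "too slow", in seconds per completion

def pick_model(prioritized, speeds):
    """Return the first model in priority order that isn't currently slow;
    fall back to the fastest overall if all of them are slow."""
    for model in prioritized:
        if speeds.get(model, float("inf")) <= SLOW_THRESHOLD_S:
            return model
    return min(speeds, key=speeds.get)

# gpt-4 is slow right now, so the next model in the list is chosen.
speeds = {"gpt-4": 35.0, "gpt-3.5-turbo": 4.0}
choice = pick_model(["gpt-4", "gpt-3.5-turbo"], speeds)
```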


Mobile is now somewhat improved with scrolling charts.

Can you explain selecting the fastest models: do you mean specifically between identical or nearly identical models, like gpt-4 and gpt-4-0613, or across non-identical models too?


Can I get a textual representation of this, in a format that would be easy to digest for my GPT-4-backed Customer Success Manager bot on https://olympia.chat?


Wouldn't it defeat the purpose if your chat bot itself runs on the service that is slow?

Either way, it would probably be easier to create the textual representation yourself, since getting exactly the result you want would take plenty of tweaking.

Maybe you could map how many standard deviations the current speed is from its average to a text label, like this:

-2: very slow, -1: slow, 0: working fine, 1: fast, 2: very fast

Then look up the current speed's rounded z-score in the dictionary and insert the label into the sentence: "The chat bot is X right now"
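A rough sketch of that lookup in Python (the labels come from the mapping above; the `describe` helper, its argument names, and the clamping are made up for illustration):

```python
# Label table from the comment: negative means slower than average.
LABELS = {-2: "very slow", -1: "slow", 0: "working fine", 1: "fast", 2: "very fast"}

def describe(current_s, mean_s, stddev_s):
    """Map the current completion time to a human-readable status."""
    # Higher completion time means slower, so flip the sign of the
    # z-score so that positive values read as "fast".
    z = round((mean_s - current_s) / stddev_s)
    z = max(-2, min(2, z))  # clamp to the labels we defined
    return f"The chat bot is {LABELS[z]} right now"

# 30s against a 20s average with 5s stddev is two deviations slow.
status = describe(current_s=30.0, mean_s=20.0, stddev_s=5.0)
```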


This is a neat idea. A service that did this for a bunch of major services to give a bullshit and politics free indication of service level would be useful.


Can you explain where you get this data from? Do you have it scheduled to poll every hour? How can you be sure the data is statistically significant?


Polling every ten minutes. I don't think the data is statistically significant, but it's still helpful for me as a user.
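For what it's worth, the measurement side of a dashboard like this is presumably just wall-clock timing around each request; a minimal sketch, with the actual API call stubbed out so it's self-contained:

```python
import time

def timed(fn, *args, **kwargs):
    """Run one call (e.g. a single completion request) and return
    (result, seconds elapsed)."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    return result, time.monotonic() - start

# Stub standing in for a real API request.
def fake_completion():
    time.sleep(0.05)
    return "ok"

result, elapsed = timed(fake_completion)
```

A scheduler would then run this every ten minutes per model and record `elapsed` alongside any error status.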


For this to be reliable, I hope you poll from different geolocations.


Only US West for now.



