
Check this benchmark:

https://serverless-benchmark.com/

It shows data from tests run over the last three days. Google Cloud has a peak cold start of 60 seconds, the second worst after Azure.




Wow, that is pretty bleak actually. 500ms+ cold starts, in a world where an edge cache serves in 5ms, definitely mean you need to deploy these strategically.


That's an amazing benchmark, though you have to fiddle with the concurrency parameter to see that some of the slowness is caused by scaling.


OTOH cold starts are calculated with a concurrency of only 10.

The point of using serverless is handling massive traffic spikes efficiently and cheaply, but 10 concurrent connections doesn't seem very massive.
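One way to see the scaling effect described above is to time requests yourself at a given concurrency. A minimal sketch in Python; `fake_fetch`, the 50ms/5ms sleeps, and the nearest-rank percentile helper are all illustrative stand-ins, not the benchmark site's actual methodology:

```python
import time
import threading
from concurrent.futures import ThreadPoolExecutor

def measure_latencies(fetch, concurrency=10, requests=50):
    """Fire `requests` calls with `concurrency` workers and return
    sorted per-request latencies in milliseconds."""
    def timed(_):
        start = time.perf_counter()
        fetch()
        return (time.perf_counter() - start) * 1000.0
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return sorted(pool.map(timed, range(requests)))

def percentile(sorted_ms, p):
    """Nearest-rank percentile of an already-sorted sample."""
    idx = min(len(sorted_ms) - 1, round(p / 100.0 * len(sorted_ms)))
    return sorted_ms[idx]

# Simulated endpoint: the first call on each worker thread pays a
# "cold start" penalty, later calls are warm.
_warm = threading.local()
def fake_fetch():
    if not getattr(_warm, "ready", False):
        _warm.ready = True
        time.sleep(0.05)   # 50 ms cold start
    else:
        time.sleep(0.005)  # 5 ms warm response

lat = measure_latencies(fake_fetch, concurrency=10, requests=50)
print(f"p50={percentile(lat, 50):.0f}ms  max={lat[-1]:.0f}ms")
```

With 10 workers and 50 requests, only the 10 cold starts show up at the tail, which is why the median looks fine while the max is ugly; that's also why a graph of averages can be misleading.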

Cloudflare Workers do much better than the rest in this respect (look at the max value; the graph is misleading), but they have serious limitations (e.g. a max of 50ms of CPU time), which makes them a bad fit for many situations.

In my current project, which requires the lowest possible latency, I'm having more success with Fly.io. Instead of deploying cloud functions, you build Docker images which are distributed across their regions and scale up/down based on demand in each region.

https://fly.io/
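For anyone curious what that looks like in practice, the gist is a Dockerfile plus a small config. A rough sketch of a `fly.toml`; the app name, region, and port are placeholders, and option names may have changed, so check Fly's docs:

```toml
# fly.toml (sketch) - app scales per region based on demand
app = "my-low-latency-app"     # placeholder name
primary_region = "fra"         # placeholder region

[http_service]
  internal_port = 8080         # whatever port your container listens on
  auto_stop_machines = true    # scale down when idle
  auto_start_machines = true   # scale up on incoming traffic
  min_machines_running = 1     # keep one warm to dodge cold starts
```

Keeping a minimum machine running in each region you care about is how you trade a little cost for avoiding the cold-start penalty entirely.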


That's like Cloud Run, which I also prefer. I'll try Fly; it looks like an even better fit.



