
This does have to be managed: an empty Django app cold-starts in ~3.5s; ours is <5s.

Heavy/slow libraries aren't loaded in the web process (we gate them via the settings file); they're only loaded in the background workers. We've found that image processing libraries and PDF handlers are the slowest to load.
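
Roughly, the settings-file split looks like this (the env var and app names below are made up for illustration, not our actual code):

    # settings.py -- sketch only; WORKER_ROLE and the app names are hypothetical
    import os

    INSTALLED_APPS = [
        "django.contrib.contenttypes",
        "django.contrib.auth",
        # ...lightweight web-facing apps only...
    ]

    # Heavy apps (image processing, PDF generation) are only registered when the
    # process is a background worker, so the Lambda web process never imports them.
    if os.environ.get("WORKER_ROLE") == "background":
        INSTALLED_APPS += [
            "imageprocessing",  # hypothetical app wrapping Pillow etc.
            "pdfreports",       # hypothetical app wrapping a PDF library
        ]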

We use 3-minute crons to keep a set of concurrent workers alive (pre-warming, as you describe). The number of requests that hit a full cold start is <0.5%.
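
The warmer itself is just a tiny Lambda on the 3-minute schedule that fans out parallel requests to the site; a rough sketch (the URL and concurrency are placeholders, and error handling is omitted):

    # warmer.py -- sketch of the pre-warm Lambda; values are placeholders
    import concurrent.futures
    import urllib.request

    WARM_URL = "https://example.com/_warm"  # hypothetical keep-alive endpoint
    CONCURRENCY = 10                        # how many workers to keep warm

    def ping(_):
        # Each in-flight GET pins a separate Lambda worker for the duration
        # of the request, which is what keeps that worker warm.
        with urllib.request.urlopen(WARM_URL, timeout=10) as resp:
            return resp.status

    def handler(event, context):
        # Triggered by an EventBridge rule on a rate(3 minutes) schedule.
        with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            statuses = list(pool.map(ping, range(CONCURRENCY)))
        return {"warmed": len(statuses)}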

But yeah, not all frameworks cold-start well in Lambda. Rails can work. We built our Rails app before Lambda could take containers, and the coupling with slow libraries was too tight to move it into Lambda with any reasonable start time; we were looking at 20-30s. It was just too expensive to decouple and get it into Lambda, and not worth whatever CPU savings we would gain.




> We use 3-minute crons to keep a set of concurrent workers alive (pre-warming, as you describe). The number of requests that hit a full cold start is <0.5%.

Does this mean you have a cron job just pinging the serverless function every 3 minutes? I'm curious how much this adds on to your costs. It means that the whole "don't pay for non-usage" thing is not quite true, but maybe it's still significantly cheaper than running an EC2 instance or whatnot. I'm curious about the cost calculation here.

Another thing I'm curious about, since you have a container-based deployment, did you compare with Fargate? It's another "serverless" solution I've been looking at lately and trying to compare with the Lambda approach. As far as I can tell, the downside is that it's hard to scale down to zero like with Lambda, but the upside is that it supports long-running tasks instead of having to set up complicated Rube Goldberg machines with Lambdas. Unfortunately I was a bit disappointed to discover that it doesn't support GPUs.


> Does this mean you have a cron job just pinging the serverless function every 3 minutes? I'm curious how much this adds on to your costs. It means that the whole "don't pay for non-usage" thing is not quite true, but maybe it's still significantly cheaper than running an EC2 instance or whatnot. I'm curious about the cost calculation here.

Yes. Specifically, the cron kicks off a Lambda function that does parallel GETs to our website at a special endpoint with a 100ms "wait" and a basic DB call. This keeps the Lambda processes alive/in-memory.
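
The endpoint itself is nothing special; roughly (a sketch, not our exact view):

    # views.py -- sketch of the keep-alive endpoint; the 100ms wait and trivial
    # DB call come from the description above, the rest is illustrative
    import time
    from django.db import connection
    from django.http import JsonResponse

    def warm(request):
        # Hold the request open briefly so parallel pings land on distinct workers.
        time.sleep(0.1)
        # A trivial query also keeps the DB connection warm.
        with connection.cursor() as cursor:
            cursor.execute("SELECT 1")
            cursor.fetchone()
        return JsonResponse({"status": "warm"})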

Keeping a function alive costs ~125ms (the 100ms wait plus ~25ms for the full function round trip) every 3 minutes, roughly ~0.041% of 1x CPU time. Our website server costs are tiny, and even lower for Staging and UAT. The benefit: we can scale to 1,000 servers (the AWS concurrency limit) at the speed of our cold start time.
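
Back-of-the-envelope, the pre-warm overhead is tiny; something like this (memory size, warm-worker count, and prices are assumptions, so check current pricing for your region):

    # prewarm_cost.py -- rough estimate only; all constants are assumptions
    PING_INTERVAL_S = 3 * 60         # cron fires every 3 minutes
    PING_DURATION_S = 0.125          # ~125ms billed per keep-alive ping
    WARM_WORKERS = 10                # hypothetical number of workers kept warm
    MEMORY_GB = 0.5                  # hypothetical function memory size
    PRICE_PER_GB_S = 0.0000166667    # assumed Lambda on-demand price
    PRICE_PER_REQUEST = 0.20 / 1_000_000

    pings_per_month = (30 * 24 * 3600 / PING_INTERVAL_S) * WARM_WORKERS
    gb_seconds = pings_per_month * PING_DURATION_S * MEMORY_GB
    monthly_cost = gb_seconds * PRICE_PER_GB_S + pings_per_month * PRICE_PER_REQUEST

    print(f"pings per month: {pings_per_month:,.0f}")
    print(f"approx monthly pre-warm cost: ${monthly_cost:.2f}")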

But if you have a heavily used website, Lambda is not cost effective at all.

> Another thing I'm curious about, since you have a container-based deployment, did you compare with Fargate?

Yes, we use Fargate for our core product, which was built in Rails before containers could be deployed to Lambda. Rails works fine on Lambda[0], but the transition cost wasn't worth it for us. Fargate is great, but as you point out it's expensive if your application isn't a user-heavy one like ours: to be highly available we always keep a minimum of 2 tasks online, but we're a B2B application, so our night usage (10pm-6am) is zero, and those 2 machines just sit there. This is why I love Lambda >> Fargate.

Also, scaling up Fargate machines is slow if you get a traffic spike.

[0] https://github.com/rails-lambda/lamby



