
All of the things you are complaining about are well-known facts that are clearly stated in the documentation.

I don't care what the equivalent computing power is in 90s-desktop terms, because you cannot replace a Lambda function with a 90s desktop, so the comparison is pointless.

The right approach is: I have a problem A that I can implement using AWS Lambda, AWS EC2, or your favourite DHH-approved stack; how do the costs of these compare to each other?
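
A back-of-the-envelope version of that comparison, as a sketch: all rates and the workload below are illustrative assumptions (roughly us-east-1, x86 on-demand pricing), so check the current pricing pages before drawing any conclusions.

    # Rough monthly cost comparison for a hypothetical workload.
    # All rates and workload numbers are assumptions, not current AWS pricing.
    LAMBDA_PRICE_PER_GB_SECOND = 0.0000166667    # assumed compute rate
    LAMBDA_PRICE_PER_REQUEST = 0.20 / 1_000_000  # assumed per-invocation rate
    EC2_SMALL_INSTANCE_PER_HOUR = 0.0208         # assumed on-demand rate

    def lambda_monthly_cost(invocations, avg_duration_s, memory_mb):
        """Lambda cost: GB-seconds of compute plus per-request charges."""
        gb_seconds = invocations * avg_duration_s * (memory_mb / 1024)
        return gb_seconds * LAMBDA_PRICE_PER_GB_SECOND + invocations * LAMBDA_PRICE_PER_REQUEST

    def ec2_monthly_cost(instances, hourly_rate=EC2_SMALL_INSTANCE_PER_HOUR):
        """EC2 cost: instances kept running for the whole month (~730 hours)."""
        return instances * hourly_rate * 730

    # Hypothetical problem A: 5M invocations/month, 200 ms each, 512 MB.
    print(f"Lambda: ${lambda_monthly_cost(5_000_000, 0.2, 512):,.2f}/month")
    print(f"EC2:    ${ec2_monthly_cost(1):,.2f}/month")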



Can you point me to where this is clearly stated in the documentation? I only found one reference, as a passing note, when I went searching for it. This value would be displayed in the pricing calculator, with a link to an explanation, if they were being honest.

The 90s CPU comparison is just to demonstrate how out of step it is with what people are used to, even on the lowest-tier hosts with shared CPU cores. Low-RAM compute seems to be artificially limited, making low-RAM Lambdas useful only in very narrow use cases.

For reference, I have an in-company DevOps team that has deployed and maintained several AWS projects, including some serverless ones; even they were surprised by the low compute available to low-RAM Lambdas.


Memory and computing power

Memory is the principal lever available to Lambda developers for controlling the performance of a function. You can configure the amount of memory allocated to a Lambda function, between 128 MB and 10,240 MB. The Lambda console defaults new functions to the smallest setting and many developers also choose 128 MB for their functions.

https://docs.aws.amazon.com/lambda/latest/operatorguide/comp...
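
For context, memory is the setting you actually turn per function; a minimal sketch of changing it with boto3 (the function name and the 1,024 MB value are placeholders):

    # Sketch: raise a function's memory size, which also scales its CPU share.
    # "my-function" is a placeholder; memory can range from 128 MB to 10,240 MB.
    import boto3

    client = boto3.client("lambda")
    client.update_function_configuration(
        FunctionName="my-function",
        MemorySize=1024,
    )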

CPU Allocation

It is known that at 1,792 MB we get 1 full vCPU (notice the v in front of CPU). A vCPU is “a thread of either an Intel Xeon core or an AMD EPYC core”. This is valid for the compute-optimized instance types, which are the underlying Lambda infrastructure (not a hard commitment by AWS, but a general rule).

If 1,024 MB is allocated to a function, it gets roughly 57% of a vCPU (1,024 / 1,792 ~= 0.57). It is obviously impossible to divide a CPU thread; in the background, AWS is dividing the CPU time. With 1,024 MB, the function will receive 57% of the processing time. The CPU may switch to perform other tasks during the remaining 43% of the time.

The result of this CPU allocation model is: the more memory is allocated to a function, the faster it will accomplish a given task.

https://dashbird.io/knowledge-base/aws-lambda/resource-alloc...
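
The rule quoted above is easy to turn into numbers; a sketch of the linear scaling it describes (a rule of thumb, not an AWS guarantee):

    # Approximate CPU share implied by the "1 full vCPU at 1,792 MB" rule of thumb.
    # This is the linear model described above, not a hard commitment by AWS.
    FULL_VCPU_MB = 1792

    def approx_vcpu_share(memory_mb):
        """Fraction of a vCPU's time a function gets at a given memory size."""
        return memory_mb / FULL_VCPU_MB

    for mb in (128, 512, 1024, 1792, 10240):
        print(f"{mb:>6} MB -> ~{approx_vcpu_share(mb):.2f} vCPU")

At the 128 MB default this works out to roughly 0.07 of a vCPU, which is the figure that tends to surprise people.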

> For reference I have a devops team in-company that deployed and maintained several AWS projects, including some serverless

Same here.

> even they were surprised at the low compute available at low RAM lambdas.

I wasn't, because we measured it and, based on the measurements, calculated what we wanted. I think it is a good approach not to assume anything.
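
For anyone who wants to repeat that measurement, a minimal sketch of a handler that times a fixed CPU-bound task; deploy it at a few memory sizes (say 128, 512, and 1,792 MB) and compare the reported durations. The loop size and return shape are arbitrary choices.

    # Sketch: time a fixed CPU-bound task inside the function to see how
    # effective compute scales with the configured memory size.
    import time

    def _cpu_bound_task(n=2_000_000):
        # Arbitrary deterministic busy work.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def handler(event, context):
        start = time.perf_counter()
        _cpu_bound_task()
        elapsed_ms = (time.perf_counter() - start) * 1000
        return {"elapsed_ms": round(elapsed_ms, 1)}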



