Hacker News | debdut's comments

Feel free to dunk, report bugs (I'll fix them this weekend), and request some features.


51.1999971, 41.1999989, 4.6000039. In these 3 numbers there are a total of 9+9+8=26 digits. If these 3 numbers were chosen randomly, what's the probability that there are 10 9's (with 2 blocks of 4 consecutive 9's), 5 1's, and 4 0's?

Let's approach this step-by-step using combinatorics:

1) First, let's consider the total number of possible arrangements: We have 26 digits in total, and the order matters. So, the total number of arrangements is: 10^26 (as each position can be filled by any digit from 0 to 9)

2) Now, let's count the favorable arrangements:

   a) We need:
      - 10 nines (including two blocks of 4 consecutive nines)
      - 5 ones
      - 4 zeros
      - 7 other digits (26 - 10 - 5 - 4 = 7)

   b) Let's start by placing the two blocks of 4 consecutive nines:
      We have 19 positions to place the first block (26 - 4 - 3 = 19, as we need to leave room for the second block)
      Then we have 15 positions for the second block
      So, there are 19 * 15 = 285 ways to place these blocks

   c) We need to place 2 more nines:
      We have 18 positions left, so there are C(18,2) = 153 ways to do this

   d) Now, we need to place 5 ones in the remaining 16 positions:
      This can be done in C(16,5) = 4368 ways

   e) Next, place 4 zeros in the remaining 11 positions:
      This can be done in C(11,4) = 330 ways

   f) Finally, each of the remaining 7 positions can hold any digit except 0, 1, and 9:
      The positions themselves are already fixed, so there is no separate arrangement factor; choosing the values gives 7^7 = 823,543 possibilities

3) Putting it all together: The number of favorable outcomes is: 285 * 153 * 4368 * 330 * 823543 ≈ 5.176 × 10^16

4) Therefore, the probability is: (5.176 × 10^16) / (10^26) ≈ 5.176 × 10^-10

So, the probability is approximately 0.0000000005176, or about 1 in 1.93 billion.
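The step-by-step counts are easy to sanity-check in a few lines of Python. One thing to watch: once the positions of the last 7 digits are fixed, choosing their values (7 options each) is the only remaining freedom, so there is no separate arrangement factor. This uses the same approximate block-placement count as above, so it inherits that assumption:

```python
from math import comb

total = 10**26                 # 26 positions, 10 choices each
blocks = 19 * 15               # two blocks of four consecutive 9s (approximate count)
extra_nines = comb(18, 2)      # the remaining two 9s in the 18 free positions
ones = comb(16, 5)             # five 1s in the 16 positions left
zeros = comb(11, 4)            # four 0s in the 11 positions left
others = 7**7                  # last 7 positions: any digit except 0, 1, 9

favorable = blocks * extra_nines * ones * zeros * others
print(favorable, favorable / total)
```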


Well, serverless was always touted for its scaling capabilities; isn't that the irony here?


The point of cloud scaling is that it’s dynamic. I don’t have to worry about an upfront cost for server hardware because I just rent as needed. If traffic drops, I decommission instances and vice versa.

After a certain point, my traffic is always high enough that I can justify the upfront server cost. That’s “sufficient scale” as mentioned in the post.
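As a toy sketch of that break-even (every number below is made up for illustration): renting wins while utilisation is low, and owning starts to pay back once the rented hours cost more than the hardware's monthly overhead.

```python
# All figures are hypothetical; plug in real quotes for your workload.
server_upfront = 8000.0   # one-time hardware cost (USD)
server_monthly = 150.0    # colo + power + maintenance (USD/month)
cloud_hourly = 1.0        # on-demand rate for a comparable instance (USD/hour)
hours_per_month = 730

def monthly_cloud_cost(utilisation: float) -> float:
    """Cost of renting only the hours you actually need."""
    return cloud_hourly * hours_per_month * utilisation

def breakeven_months(utilisation: float) -> float:
    """Months until owning beats renting, at a fixed utilisation."""
    saving = monthly_cloud_cost(utilisation) - server_monthly
    return float("inf") if saving <= 0 else server_upfront / saving

# Higher utilisation -> faster payback for owned hardware.
for u in (0.3, 0.6, 0.9):
    print(u, round(breakeven_months(u), 1))
```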


What! They changed the title. Tells you something


The subtitle is "The move from a distributed microservices architecture to a monolith application helped achieve higher scale, resilience, and reduce costs." And the article itself mentions the 90% cost reduction. So the title seems pretty much in-line with the original intent.


But, by omission, it reads as though Prime Video rebuilt their stack without serverless and got a 90% cost reduction.

This post is going to pick up a lot of traction and I suspect these comments are going to bikeshed monolith vs microservices for the next day.

On reading it, this is for a video quality monitoring system that needs to consume and process video. Generally a compute- and time-intensive task, and something not always suited to serverless, particularly when it's not easy to parallelise.

The task at hand doesn't sound ideally suited to serverless, but the existence of the post shows that's not readily obvious. So it's a valuable post, explaining a scenario where a few big machines are the best call.

But the sensationalism of the headline suggests all serverless is expensive and wasteful, when in reality the same is true for a non-ideal workload on a monolith.


Serverless has such bullshit, insidious pricing that it makes it seem like you're saving money, only to figure out you're in the shit once you're knee deep in it.

For example, you'll have to read the fine print to find out that a 256MB lambda has the compute power of a 90s desktop PC, because compute scales with memory. And to get access to "one core" of compute you have to use something like 2GB of memory.

Now you may say "serverless isn't geared towards compute" - but this kind of CPU bottlenecking affects rudimentary stuff. Any framework that does some upfront optimizations will murder your first request/cold start performance - the EF Core ORM expression compiler takes seconds to cold start the model/queries! For comparison, I can run ~100 integration tests (with an entire context bootstrap for each) against a real database in that time on my desktop machine. It's unbelievably slow - unless you're doing trivial "reparse this JSON and manually concat shit to a DB query" kinds of workloads.

You could say those frameworks aren't suited for serverless - or you could say that the pricing is designed to screw over people trying to port these kinds of workloads to serverless.


well, they have no incentive to not make you pay for CPU time on your application startup


The problem isn't paying for cold start - the problem is that the CPU scaling makes low-RAM lambdas very, very niche. You can easily have a 256 MB web server that talks to a database - and that's their supposed selling point - but having it served on a ~300MHz-equivalent CPU is really, really limiting, and they should be upfront about that.

If you went to a car rental and they told you we have a cheap car that's slower when you add passengers - and then you drive it to pick up your wife and it turns out it only goes 20 km/h when your wife gets in - you would be rightfully mad. You could say "why didn't you ask for specifications" but you have certain expectations of what a car should behave like and what they gave you doesn't really qualify as a car no matter if their disclaimer was technically correct.


> and they should be upfront about that.

Do you need a screenshot and a red box around the text, or would you believe me if I told you it's written near the beginning of their Lambda pricing page? It's also written in the docs on configuring Lambda functions, so at this point it's a PEBKAC/RTFM issue, not "them not being upfront".

And frankly it's done that way because they have standardized machines; scheduling CPU-heavy/memory-light alongside CPU-light/memory-heavy is extra complexity. I mean, they should, but they have no real incentive to, as in most cases apps written in slower languages are also memory-fatter, so it fits well enough.

> If you went to a car rental and they told you we have a cheap car that's slower when you add passengers - and then you drive it to pick up your wife and it turns out it only goes 20 km/h when your wife gets in - you would be rightfully mad.

Getting the lowest tier is more like renting a 125cc bike than a car, if anything. You can do plenty within that limit in an efficient language, too.


>Do you need a screenshot and a red box around the text, or would you believe me if I told you it's written near the beginning of their Lambda pricing page? It's also written in the docs on configuring Lambda functions, so at this point it's a PEBKAC/RTFM issue, not "them not being upfront"

A simple CPU-time readout on the pricing calculator page when you enter the RAM would be sufficient, linking to the said docs. Trivial to implement, and it really cleans things up when planning resource costs.
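Such a readout would only be a few lines. A sketch, assuming the rule of thumb quoted elsewhere in this thread (roughly one full vCPU at 1,792 MB, scaling linearly below that) and an illustrative per-GB-second rate:

```python
FULL_VCPU_MB = 1792            # rule of thumb: ~1 full vCPU at this memory setting
GB_SECOND_USD = 0.0000166667   # illustrative x86 rate; check the current pricing page

def describe(memory_mb: int) -> str:
    """Rough vCPU share and running cost for a given Lambda memory setting."""
    vcpu_share = memory_mb / FULL_VCPU_MB   # linear scaling below a full vCPU
    usd_per_second = memory_mb / 1024 * GB_SECOND_USD
    return f"{memory_mb} MB -> ~{vcpu_share:.2f} vCPU, ${usd_per_second:.7f}/s"

for mb in (128, 256, 512, 1024, 1792):
    print(describe(mb))
```

At 256 MB this prints a share of roughly 0.14 vCPU, which is the surprise being complained about upthread.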


All of the things you're complaining about are well-known facts that are clearly stated in the documentation.

I don't care what the equivalent computing power is in 90s-desktop terms, because you cannot replace a lambda function with a 90s desktop, so it's pointless.

The right approach is: I have a problem A that I can implement using AWS Lambda, AWS EC2, or your favourite DHH-approved stack; how much do these cost compared to each other?


Can you point me to where this is clearly stated in the documentation? I only found one passing reference when I went searching for it. If they were being honest, this would be a value displayed in the pricing calculator, with a link to an explanation.

The 90s CPU comparison is just to demonstrate how out of place this is compared with what people are used to, even on the lowest-tier hosts with shared CPU cores. Low-RAM compute seems artificially limited, which makes low-RAM lambdas useful in only very narrow use cases.

For reference, we have an in-company devops team that has deployed and maintained several AWS projects, including some serverless ones; even they were surprised at how little compute the low-RAM lambdas get.


Memory and computing power

Memory is the principal lever available to Lambda developers for controlling the performance of a function. You can configure the amount of memory allocated to a Lambda function, between 128 MB and 10,240 MB. The Lambda console defaults new functions to the smallest setting and many developers also choose 128 MB for their functions.

https://docs.aws.amazon.com/lambda/latest/operatorguide/comp...

CPU Allocation

It is known that at 1,792 MB we get 1 full vCPU (notice the v in front of CPU). A vCPU is "a thread of either an Intel Xeon core or an AMD EPYC core". This is valid for the compute-optimized instance types, which are the underlying Lambda infrastructure (not a hard commitment by AWS, but a general rule).

If 1,024 MB are allocated to a function, it gets roughly 57% of a vCPU (1,024 / 1,792 ≈ 0.57). It is obviously impossible to divide a CPU thread; in the background, AWS is dividing the CPU time. With 1,024 MB, the function will receive 57% of the processing time. The CPU may switch to perform other tasks in the remaining 43% of the time.

The result of this CPU allocation model is: the more memory is allocated to a function, the faster it will accomplish a given task.

https://dashbird.io/knowledge-base/aws-lambda/resource-alloc...
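One consequence of that allocation model, assuming CPU share really does scale linearly with memory below 1,792 MB: for a purely CPU-bound task the bill comes out roughly the same at every memory setting, and only the wall-clock time changes. A sketch (the per-GB-second rate is illustrative):

```python
FULL_VCPU_MB = 1792              # ~1 full vCPU at this memory setting
GB_SECOND_USD = 0.0000166667     # illustrative rate, USD per GB-second

def run_time_and_cost(vcpu_seconds_needed: float, memory_mb: int):
    """Wall-clock time and cost of a CPU-bound task under linear CPU scaling."""
    share = memory_mb / FULL_VCPU_MB            # fraction of a vCPU received
    wall_clock_s = vcpu_seconds_needed / share  # less CPU -> proportionally slower
    cost = wall_clock_s * memory_mb / 1024 * GB_SECOND_USD
    return wall_clock_s, cost

# The same 2 vCPU-seconds of work at different memory settings:
for mb in (256, 1024, 1792):
    seconds, usd = run_time_and_cost(2.0, mb)
    print(f"{mb} MB: {seconds:.1f}s wall clock, ${usd:.7f}")
```

The cost column comes out nearly identical each row; what extra memory actually buys a CPU-bound function is latency, which matches the cold-start complaint upthread.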

> For reference I have a devops team in-company that deployed and maintained several AWS projects, including some serverless

Same here.

> even they were surprised at the low compute available at low RAM lambdas.

I wasn't, because we measured it, and based on the measurements we calculated what we wanted. I think it's a good approach not to assume anything.


I don't get the microservices-to-monolith part of this blog post either.

It looks like they replaced the serverless implementation of a service with a hosted app because that service wasn't scaling.

They don't really communicate the architecture of the whole Prime Video product, but it doesn't look like a monolith.


wow


LLMs are taking note, future AGI will make sure Italy is purged from earth


General Artificial General Intelligence


The general AGI is hacking my ATM machine!


only if it knows your PIN number...


Pleonasm https://www.theskepticsguide.org/podcasts/episode-924 If you want to see it with your own eyes


They will be defended legally by the Attorneys General


S..s..ssir yes sir!


thanks for the correction, haha


They're gonna make them a prompt they cannot refuse.


No more pizza, only pineapple.


man I just looked at ukv, it looks too good to be true, 30x RocksDB, wtf! Hoping it's true


He-hey! Yes, we are fast, but I don't think we ever claimed 30x. We are faster in almost every workload (we lose on range scans for some reason), but at best by 7x (batch reads) and 5x (batch writes). Still, this should be plenty for all intents and purposes! I can post some updates on that tomorrow :)


If you are curious about how it works, here is a pretty good explanation: https://youtube.com/watch?v=ybWeUf_hC7o

For some reason the conference hasn't made last year's talks public or searchable, but you should be able to access it with the link.



I use this one almost daily. It's great to find real world examples of APIs/contracts being used. Also, instant results!

The underlying data may be limited (I have no idea how large it is, I doubt it has indexed every public repository out there), but I never failed to find examples of what I was looking for.


Been using this one for well over a year now. Very useful for finding real-world examples of certain implementations or API usage.


My pleasure :) Please request features in the issues section if required.

