
This article may be poorly written, but if you actually read through the whole thing, it makes some brilliant points:

1. IaaS providers are incentivized to lure you in with managed services that look like a great deal on paper but are actually more expensive to operate than a self-rolled alternative.

2. "DevOps" and "Infra" people charged with making these decisions often follow the industry zeitgeist, which for the moment is whatever AWS, GCP, and Azure are offering. This is in spite of the best interests of the company they work for. The right decision in so many cases is what choice one can defend when something goes wrong, or what you can parlay into the next gig. If costs, performance, and features aren't closely managed, scrutinized, compared, going with a solution like Aurora/Redshift/Dynamo won't be challenged.

3. Nothing is easily comparable because these services aren't just a CPU, as you mention; they're hardware and software rolled into one. This is intentional, and it has little to do with making things easier for you. At best, IaaS providers defend this as "differentiation", but it's closer to obfuscation of true cost. Go ahead and ask your AWS rep whether use cases x, y, and z will perform well on Lambda and you'll likely get a near-canned response that they all fit perfectly, even if y is a job with high load spread uniformly through time. The only way you can make this comparison is by building both services, standing them up in production for a week, and checking the bill (rough sketch of the math below). In other cases, such as FPGAs and databases, you'll have a much harder time: not only is the software/hardware offering unique, but the user code will be completely different and require entire rewrites to get what _might_ be a fair comparison.
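
To make the Lambda comparison concrete, here's a back-of-envelope sketch in Python. Every number is an assumption (roughly us-east-1 on-demand rates at the time of writing, plus a made-up workload), so treat it as the shape of the math, not a quote:

    # Back-of-envelope: Lambda vs. a plain EC2 instance for a steady,
    # *uniform* load. All prices are assumptions; plug in current ones.
    LAMBDA_GB_SECOND = 0.0000166667   # $ per GB-second of execution
    LAMBDA_REQUEST   = 0.20 / 1e6     # $ per invocation
    EC2_HOURLY       = 0.085          # $/hr, e.g. a c5.large (assumed)

    def lambda_monthly(reqs_per_sec, ms_per_req, memory_gb):
        secs = 30 * 24 * 3600
        compute  = reqs_per_sec * secs * (ms_per_req / 1000) * memory_gb * LAMBDA_GB_SECOND
        requests = reqs_per_sec * secs * LAMBDA_REQUEST
        return compute + requests

    def ec2_monthly(instances):
        return instances * EC2_HOURLY * 24 * 30

    # A modest but uniform load: 100 req/s, 200 ms each, 1 GB functions.
    print(f"lambda: ${lambda_monthly(100, 200, 1.0):,.0f}/mo")  # ~$916
    print(f"ec2:    ${ec2_monthly(2):,.0f}/mo")                 # ~$122

The exact crossover moves with the numbers, but the point stands: load spread uniformly through time is exactly the case where paying per invocation hurts most.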




Anecdotally, my company spends several million dollars a year on AWS, mostly so that teams don’t have to think too much about the hardware they use to implement their solutions. I’d say the strategy is an unmitigated success, even if it costs 3 times what a similar on-premise (or dedicated) solution would.

My last company went the dedicated route, and it was perpetually short of servers because nobody was willing to plan capacity ahead of time, to the point that the database team couldn’t provision any new databases for a year.


My last company went the dedicated route. Management understood how much of a ripoff AWS/Azure/GCP was and hired people who knew how a lot of apps are built and run, so we didn't need to redevelop them to fit the cloud architecture, which costs a metric ton more for little benefit. Tech press releases are almost always complete BS (not all, but most): "we saved x", using some dubious calculation.

AWS makes some things easy, sure, but at my house I run what would cost ~$15k a year on AWS, and I spent ~$1,500 all-in, including a used 10G switch. Power use is a joke; even with the expensive electricity where I am, AWS is still a ripoff. Essentially the same processors he was quoting for AWS, 10G to the home nodes that need it, and Bob's your uncle. I do TensorFlow work at home too. I bet that would cost $30k/year on AWS; I do it on the $1,500 of hardware I already bought, and for another ~$500 for a 3080 (or a few) in my servers I'd be saving tons. Yeah, it can go down, but just buy two, or a rack; it's still going to be cheaper than AWS or hosted colo. I could scale this myself to a few thousand servers. I've seen it done wrong so many times.....
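
For what it's worth, here's the rough amortization behind those numbers as a sketch. The hardware figures come from the comment above; everything else (lifetime, power draw, electricity price) is an assumption:

    # Rough amortization of the home setup described above.
    HW_COST      = 1_500 + 500    # one-time purchase + a 3080, $ (from above)
    LIFETIME_YRS = 3              # assumed useful life
    POWER_WATTS  = 400            # assumed average draw
    KWH_PRICE    = 0.30           # assumed "expensive electricity", $/kWh

    yearly_power = POWER_WATTS / 1000 * 24 * 365 * KWH_PRICE   # ~$1,051/yr
    yearly_own   = HW_COST / LIFETIME_YRS + yearly_power       # ~$1,718/yr

    for aws_yearly in (15_000, 30_000):
        print(f"own ${yearly_own:,.0f}/yr vs aws ${aws_yearly:,.0f}/yr "
              f"-> {aws_yearly / yearly_own:.0f}x")

Even doubling the assumed power draw, AWS still comes out roughly 5-10x more expensive under these numbers.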

The truth is more like what he mentioned in his first paragraph: if you have a fanboy at the helm, it doesn't matter that he's the CTO, he "knows it all". I've worked with a lot of CTOs lately who don't have a clue, and I feel sorry for them, wasting so many company assets because underlings couldn't possibly know more. I love the ex-Google/Facebook guys saying, basically, "scrap everything and re-architect". It's funny and so wasteful, but at least they're making more money than me!

One guy said he hates Jenkins and that it should all be redeveloped. Some teams have hundreds of Jenkins jobs that work just fine; some of that stuff has been running fine for 10+ years. I'm not sure he has the company's interest in mind. He wants everything to be "serverless", which is effectively the same damn thing, genius.

Hire some sysadmins to run your show. The good ones cost a ton, but computers at their core haven't really changed much in 20+ years, and neither has the fundamental way the internet works. Good, experienced sysadmins can save you a ton, and the result is just as reliable. You just have to trust experience over advertising, and real math over funny math.


I wonder, what do you do if you want multi-region failover in your home setup?

I’m fully in agreement that owning your own hardware is cheaper. But if, as a company, you want extreme redundancy now and want to be mostly certain it works, you move to AWS instead of trying to build a team that will set all of that up for you.
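
Part of why that's hard to beat is how little work the failover machinery is once you're on AWS. A minimal sketch using boto3 and Route 53 DNS failover, assuming an existing hosted zone; the zone ID, hostnames, and IPs are hypothetical placeholders:

    # DNS-level multi-region failover via Route 53 (boto3).
    import boto3

    r53 = boto3.client("route53")

    # Health check probing the primary region's endpoint.
    hc = r53.create_health_check(
        CallerReference="primary-us-east-1",
        HealthCheckConfig={
            "Type": "HTTPS",
            "FullyQualifiedDomainName": "primary.example.com",
            "Port": 443,
            "ResourcePath": "/healthz",
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )

    def failover_record(ip, role, set_id, health_check_id=None):
        rrset = {
            "Name": "app.example.com.",
            "Type": "A",
            "SetIdentifier": set_id,
            "Failover": role,          # "PRIMARY" or "SECONDARY"
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        }
        if health_check_id:
            rrset["HealthCheckId"] = health_check_id
        return {"Action": "UPSERT", "ResourceRecordSet": rrset}

    r53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",    # hypothetical zone ID
        ChangeBatch={"Changes": [
            failover_record("203.0.113.10", "PRIMARY", "us-east-1",
                            hc["HealthCheck"]["Id"]),
            failover_record("198.51.100.20", "SECONDARY", "eu-west-1"),
        ]},
    )

Replicating the data layer across regions is the genuinely hard part, of course; this only moves the traffic.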

Also, when doing TensorFlow things on AWS, I can just temporarily rent that monster server with 12(?) GPUs to train my model in a few hours, instead of waiting days for my home server to do the same thing (even though that’s cheaper).
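
The rent-for-burst math looks roughly like this sketch; the rental rate, speedup, and home power cost are all assumptions:

    # Burst training: rent a big multi-GPU box vs. grind at home.
    RENTAL_PER_HOUR = 25.0   # assumed on-demand multi-GPU instance, $/hr
    SPEEDUP         = 12.0   # assumed: rented box is ~12x faster
    HOME_PER_HOUR   = 0.05   # assumed marginal cost at home (power), $/hr

    def compare(home_hours):
        rented_hours = home_hours / SPEEDUP
        print(f"home:   {home_hours:5.1f} h, ${home_hours * HOME_PER_HOUR:6.2f}")
        print(f"rented: {rented_hours:5.1f} h, ${rented_hours * RENTAL_PER_HOUR:6.2f}")

    compare(72)   # a 3-day run at home: ~$3.60, vs ~6 h rented for ~$150

Under these assumptions you pay ~40x more per run, but you get the model back the same day, which is sometimes the whole point.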


Stepping back, though: you do make that hardware choice when you select a service from an IaaS; the actual selection is just hidden from you. It's a tradeoff where the industry has overwhelmingly come out on one side, and I think it's time to start questioning that. Instead of hiring engineers with "AWS experience", why not hire engineers with experience automating systems using OSS tooling?



