Heroku is both incredibly cheap and incredibly expensive.
It's only $7/month to deploy a web application, more if you want some of the paid features or a database instance. It all works out of the box with instant deployment; it's fantastic.
Then it suddenly jumps by $50 per gigabyte of RAM, which makes it massively expensive for any serious workload. It's crazy how much they try to charge; it makes AWS look like a bunch of clowns in comparison.
If it saves you an FTE or two otherwise spent managing your own infrastructure, there's a lot of headroom before it becomes a losing proposition.
Which is what I think the OP misses discussing in enough detail with AWS -- are there ways AWS saves some customers developer/ops staff time over the cheaper alternatives? Because staff time is often more expensive than the resources themselves. It could just be "because that's what we're familiar with" (and that's a legitimate cost saving), but it could also be genuinely better dashboard UIs, APIs, integrations, whatever.
[I am currently investigating Heroku for our low-traffic tiny-team app, after our sysadmin/devops position was eliminated. The performance characteristics have been a negative surprise (I can't figure out how anyone gets by with a `standard` rather than a `performance` dyno, even for a small low-traffic app; I'm currently writing up my findings for public consumption), but the value of the management they do so we don't have to is meeting and exceeding my expectations. (We currently manage our own AWS resources directly, so I know exactly what we can no longer do sustainably with the position eliminated, and the value we're getting from not having to do it.)]
My experience - from doing devops consulting and moving clients off AWS every chance I get - is that moving clients off AWS is bad business for devops consultants in terms of short-term billable hours, because clients spend more money on me when they're on AWS. If I were after maximising billable hours in the short term, I'd recommend AWS all the time...
As such, a lot of devops consultants have all the wrong incentives to recommend it, and these days most of them also lack the experience to price out alternatives.
E.g. a typical beginner's mistake is to price out a one-to-one match of servers between AWS and an alternative, but one of the benefits of picking other options is that you can look at your app and design a setup that fits your needs better. Network latency matters. Being able to add enough RAM to hold your database working set matters. And so on. With AWS this is often possible, but often at the cost of also scaling up other things you don't need.
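To see why one-to-one pricing misleads, here's a back-of-envelope sketch. Every number here is a made-up placeholder for illustration, not a real quote from any provider -- the point is the shape of the math:

```python
# Hypothetical cost comparison: fit a database working set in RAM by
# scaling out smaller cloud instances vs. one dedicated box sized to
# the workload. All prices are placeholders, not real quotes.

working_set_gb = 200  # database working set we want resident in RAM

# Option A: cloud instances capped at 64 GB each, so we shard/replicate.
cloud_instance_ram_gb = 64
cloud_instance_monthly = 450  # placeholder price
cloud_nodes = -(-working_set_gb // cloud_instance_ram_gb)  # ceiling division
cloud_total = cloud_nodes * cloud_instance_monthly

# Option B: a single dedicated server with enough RAM for the whole set.
dedicated_ram_gb = 256
dedicated_monthly = 180  # placeholder price

print(f"cloud: {cloud_nodes} x {cloud_instance_ram_gb} GB = ${cloud_total}/mo")
print(f"dedicated: 1 x {dedicated_ram_gb} GB = ${dedicated_monthly}/mo")
# Note the sharding in option A also adds operational complexity that
# the single big-RAM box avoids entirely.
```

The per-unit prices matter less than the structural difference: one option forces you to buy CPU and complexity you didn't ask for just to get the RAM.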
And most developers won't do that kind of optimisation unless you give them budget responsibility, make them justify the cost, and then actually cut their budget.
Development teams used to AWS tend to spin up instance after instance instead of actually measuring and figuring out why they're running into limits. Giving dev teams control over infra without someone with extensive operations experience involved is an absolute disaster if you want to manage costs.
I work for a VC now. When I evaluate the tech teams of people who apply to us, it's perfectly fine if they use AWS, but it's a massive red flag if they don't understand their costs and the costs of alternatives. Usually the ones who do understand know they're paying for speed and convenience, and either have thoughts on how to cut costs by moving parts or all of their services off AWS as they scale, or have a rationale for why hosting is never going to be a big part of their overall cost base.
The only case where AWS is cost effective is if you get big enough to negotiate really hefty discounts. It's possible - I've heard examples.
But if you're paying AWS list prices, chances are sooner or later you'll come across a competitor that isn't.
When you're talking about clients spending more on you as a consultant when they are on AWS... compared to what? Alternatives like GCS? Or actual on-premises hardware? Or what? When you "move clients off AWS every chance you get", you are moving them to what instead?
I'm having trouble following your theory of why having clients stay on AWS ends up leading to more consultant billable hours, I think because I don't understand what alternatives you are comparing it to. I am pretty sure it is not heroku, as in the earlier part of the thread?
Or are you talking about compared to simpler "VPS" hosts like, say, Linode? Doesn't that require a lot more ops skillset and time to set up and run compared to AWS, or do you think it doesn't?
When moving them off AWS it'd usually be to managed hosting on monthly contracts.
Heroku turns expensive real fast. You're paying for AWS plus Heroku's margin on top.
Managed hosting ranges from API-based provisioning not much different than AWS to ordering server by server.
In practice, for me at least, the devops time spent dealing with the server itself is generally at most a matter of downloading a bootstrap script that provisions CoreOS/Flatcar, ties the machine into a VPN, and records the details. The rest of the job can be done by simple orchestration elsewhere. I have servers I haven't needed to touch in 5 years, apart from recently switching them from CoreOS to Flatcar (the OS auto-updates, and everything runs in containers). Once you've done that, it's irrelevant what the server is or where it is.
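To make that concrete, here's a minimal sketch of what such a bootstrap can look like. It's hypothetical -- the inventory endpoint, addresses and file paths are made up -- but `flatcar-install` is Flatcar's real installer script, and Ignition is its real provisioning mechanism:

```python
#!/usr/bin/env python3
"""Hypothetical node bootstrap: install Flatcar with an Ignition config
that joins the VPN, then register the node with an inventory service.
Endpoint, device and addresses below are illustrative placeholders."""
import json
import socket
import subprocess
import urllib.request

INVENTORY_URL = "https://inventory.internal.example/nodes"  # hypothetical

def install_flatcar(device: str, ignition_path: str) -> None:
    # flatcar-install is Flatcar's stock installer; the Ignition file is
    # where VPN keys, container runtime and auto-update settings live,
    # so the box never needs hand-configuration afterwards.
    subprocess.run(
        ["flatcar-install", "-d", device, "-i", ignition_path],
        check=True,
    )

def register_node(vpn_ip: str) -> None:
    # "Record the details": tell the orchestration layer the node exists.
    payload = json.dumps(
        {"hostname": socket.gethostname(), "vpn_ip": vpn_ip}
    ).encode()
    req = urllib.request.Request(
        INVENTORY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    install_flatcar("/dev/sda", "/tmp/ignition.json")
    register_node("10.8.0.42")  # placeholder VPN address
```

Everything interesting lives in the Ignition config; the script itself is nearly provider-agnostic, which is exactly why the server underneath stops mattering.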
For modern server hardware, if you run your own colo setup, it's a matter of setting up PXE and TFTP once in the facility; after that you can use an IPMI connection to do OS installation and config remotely. So even with colocated servers, I'd typically visit the data center only once or twice a year to manage several racks of servers. The occasional dead disk would be swapped by data center staff. Everything else would typically be handled via IPMI.
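The remote half of that is little more than scripting ipmitool (a real CLI; the BMC addresses and credentials below are placeholders): set the machine to network-boot on its next start, power-cycle it, and let the PXE/TFTP server do the install:

```python
#!/usr/bin/env python3
"""Sketch of remote reprovisioning over IPMI. ipmitool is a real CLI
with these subcommands; hosts and credentials here are placeholders."""
import subprocess

def ipmi(host: str, user: str, password: str, *args: str) -> None:
    # lanplus = IPMI v2.0 over the network, i.e. no physical access needed.
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", user, "-P", password, *args],
        check=True,
    )

def reprovision(host: str, user: str, password: str) -> None:
    # One-shot PXE boot on next start, then power cycle; the PXE/TFTP
    # server set up in the colo hands out the installer from there.
    ipmi(host, user, password, "chassis", "bootdev", "pxe")
    ipmi(host, user, password, "power", "cycle")

if __name__ == "__main__":
    for bmc in ["10.0.100.11", "10.0.100.12"]:  # placeholder BMC addresses
        reprovision(bmc, "admin", "REDACTED")
```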
E.g. one of my setups involved 1k containers across New Zealand, Germany and several colo facilities in the UK. Hetzner (Germany) was the first managed hosting provider we found that could compete on total cost of ownership with leasing servers and putting them in racks in the UK. Had we been located in Germany (cheaper colo facilities than near London), they'd not have been able to compete, but putting stuff in a colo facility somewhere we didn't have people nearby would be too much of a hassle, and the cost difference was relatively minor.
Small parts of the bootstrap scripts were the only thing that differed between deploying into KVM VMs (New Zealand), managed servers not in the same racks (Hetzner), and colocated bare metal booting via PXE on its own physical networks (UK). Once servers were tied into the VPN and the container runtime and firewall were in place, our orchestration scripts (a couple of weeks of work, long before Kubernetes etc. was a thing -- we originally deployed OpenVZ containers, so the same tool could deploy to OpenVZ, KVM and Docker over the years) would deploy VMs/containers to them, run backups and failover setups, and dynamically tie them into our frontend load balancers.
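The shape of that kind of orchestration is simpler than it sounds. A toy sketch of the core loop -- hostnames, image names and the load-balancer endpoint are all made up, and a real tool would handle backups and failover on top:

```python
#!/usr/bin/env python3
"""Toy orchestration sketch: the target could be a KVM VM, a managed
server or colo bare metal -- once it's on the VPN and runs a container
runtime, deployment looks identical. All names/endpoints are made up."""
import json
import subprocess
import urllib.request

LB_API = "https://lb.internal.example/backends"  # hypothetical LB API

def deploy(host: str, image: str, name: str, port: int) -> None:
    # Same commands regardless of what the "server" physically is.
    ssh = ["ssh", f"core@{host}"]
    subprocess.run(ssh + ["docker", "pull", image], check=True)
    subprocess.run(ssh + ["docker", "rm", "-f", name], check=False)
    subprocess.run(
        ssh + ["docker", "run", "-d", "--restart=always", "--name", name,
               "-p", f"{port}:{port}", image],
        check=True,
    )

def register_backend(host: str, port: int) -> None:
    # Dynamically tie the new container into the frontend load balancer.
    payload = json.dumps({"address": host, "port": port}).encode()
    req = urllib.request.Request(
        LB_API, data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    deploy("10.8.0.42", "registry.example/app:latest", "app", 8080)
    register_backend("10.8.0.42", 8080)
```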
We did toy with the idea of tying AWS instances into that setup too, but over many years of regularly reviewing the cost we could never get AWS cheap enough to justify it. We kept trying because there was a constant stream of people in the business who believed - with no data - that it'd be cheaper, but the closest we got in our experiments with AWS was ca. twice the cost.
For the record, in my current job we run entirely on AWS. I could cut the cost of what we're using it for by ~80-90% by moving it to Hetzner. But the cost is low enough that it's not worth investing the time in the move at this point, and it's not likely to grow much (it's used mostly for internal services for a small team). That's the kind of scenario where AWS is great - offloading developer time on setups that are cheap to run even at AWS markups.
I tend to recommend to people that it's fine to start with AWS to deploy fast and let their dev team cobble something together. But they need to keep an eye on the bill, and have some sort of plan for how to manage the costs as their system gets more complex. That means also thinking long and hard before adding complicated dependencies on AWS. E.g. try to hide AWS dependencies behind APIs they can replace.
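The "hide AWS dependencies behind APIs" point deserves a concrete illustration. A minimal sketch, assuming blob storage as the use case (boto3 is AWS's real Python SDK; the interface and class names are illustrative): keep the AWS dependency inside one implementation of a small interface, so a later migration swaps a class rather than rewriting call sites:

```python
"""Sketch of isolating an AWS dependency behind a replaceable API.
boto3 calls shown are real S3 SDK methods; class names are illustrative."""
from abc import ABC, abstractmethod
from pathlib import Path

class BlobStore(ABC):
    """The only interface the rest of the codebase is allowed to see."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3BlobStore(BlobStore):
    def __init__(self, bucket: str):
        import boto3  # the AWS dependency lives only inside this class
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        resp = self._s3.get_object(Bucket=self._bucket, Key=key)
        return resp["Body"].read()

class DiskBlobStore(BlobStore):
    """Drop-in replacement for when (if) you move off AWS."""
    def __init__(self, root: str):
        self._root = Path(root)

    def put(self, key: str, data: bytes) -> None:
        path = self._root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self._root / key).read_bytes()
```

The discipline costs almost nothing up front, and it's the difference between "migration is a config change" and "migration is a rewrite".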