
I will only add that non-standard licenses also hurt adoption, specifically in medium/big businesses/enterprises.

Most organizations understand common open source licenses and there's usually a blanket statement that allows teams to use GPL/MIT/whatever-licensed software.

Anything outside that subset of licenses (even if they're permissive, open source or whatnot) requires a legal review and a lot of people won't go through the pain of that process just to use a library/service/app. It's easier to just choose something else.


> specifically in medium/big businesses/enterprises.

This is a feature, not a bug.


If you don’t want big companies to use your code, just make it GPL, which is usually banned there.

> there's usually a blanket statement that allows teams to use GPL/MIT/whatever-licensed software.

In my experience: MIT yes, GPL no.


Of course you reduced 90% of the cost. Most of these costs don't come from the software, but from the people and automation maintaining it.

With that cost reduction you also removed monitoring of the platform, people oncall to fix issues that appear, upgrades, continuous improvements, etc. Who/What is going to be doing that on this new platform and how much does that cost?

Now you need to maintain k8s, postgresql, elasticsearch, redis, secret management, OSs, storage... These are complex systems that require people who understand how they work internally, how they scale, and their common pitfalls.

Who is going to upgrade kubernetes when they release a new version that has breaking changes? What happens when Elasticsearch decides to split-brain and your search stops working? When the DB goes down or you need to set up replication? What is monitoring replication lag? Or even simple things like disks being close to full? What is acting on that?

I don't mean to say Heroku is fairly priced (I honestly have no idea) but this comparison is not apples to apples. You could have your team focused on your product before. Now you need people dedicated to work on this stuff.


Anything you don't know about managing these systems can be learned asking chatgpt :P

Whenever I see people doing something like this I remember I did the same when I was in 10-person startups, and it required A LOT of work to keep all these things running (mostly because back then we didn't have all these cloud-managed systems), and that time would have been better invested in the product instead of wasting time figuring out how these tools work.

I see value in this kind of work if you're at the scale of something like Dropbox, moving off S3 will greatly improve your bottom line, and you have a team that knows exactly what it's doing and will be assigned the maintenance of this work. If this is being done merely from a cost-cutting perspective and you don't have the people who understand these systems, it's a recipe for disaster, and once shit is on fire the people who would be assigned to "fix" the problem will quickly disappear because the "on-call schedule is insane".


I bailed out of one company because even though the stack seemed conceptually simple in terms of infra (there wasn't a great deal to it), the engineering complexity more than made up for it. The end result was the same: non-stop crisis management, non-stop firefighting, no capacity to work on anything new, just fixing the old.

All by design, really, because at that point you're not part of an engineering team; you're a code monkey operating in service of growth metrics.


> and that time would have been better invested in the product instead of wasting time figuring out how these tools work

It really depends on what you're doing. Back then a lot of non-VC startups worked better and the savings possibly helped. It also helps grow the team and have less reliance on the vendor. It's long term value.

Is it really time wasted? People often go into resume building mode and do all kinds of wacky things regardless. Perhaps this just helps scratch that itch.


Definitely fine from a personal perspective and for resume building; it's just not in the best interest of the business because as soon as the person doing resume building is finished they'll jump ship. I've definitely done this myself.

But I don't see this being good from a pure business perspective.


> it's just not in the best interest of the business because as soon as the person doing resume building is finished they'll jump ship. I've definitely done this myself.

I certainly hope not everyone does so. I've seen plenty of people lean toward choices based on resume / growth / interest rather than the pure good of the business, but not leave after doing so.

> But I don't see this being good from a pure business perspective.

And a business at the end of the day is operated by its people. Sure, there are an odd few who operate purely in good faith, but we're not robots or AI. I doubt every decision everywhere is 100% business-optimal, or that business optimality is the only criterion.


> ... I remember I did the same when I was in 10-person startups, and it required A LOT of work to keep all these things running...

Honest question: how long ago was that? I stepped away from that ecosystem four or so years ago. Perhaps ease of use has substantially improved?


> you also removed monitoring of the platform

You don't think they have any monitoring within Kubernetes?

I imagine they have more monitoring capabilities now than they did with Heroku.


The fact that HN seems to think this is "FUD" is absolutely wild. You just talked about (some of) the tradeoffs involved in running all this stuff yourself. Obviously for some people it'll be worth it and for others not, but it is absolutely amazing that there are people who don't even seem to accept that those tradeoffs exist!

I assume you're referring to my comment.

The reason I think parent comment is FUD isn't because I don't acknowledge tradeoffs (they are very real).

It's because the parent comment implies that the people behind "reclaim the stack" didn't account for monitoring, people costs, etc.

Obviously any reasonable person making that decision includes it in the calculation. Obviously nobody sane throws monitoring entirely out the window for savings.

Accounting for all of these, it can still be viable and significantly cheaper to run your own infra, especially if you operate outside of the US and you're able to eat the initial investment.


Not your comment specifically, you're one of many saying FUD.

Honestly, if you accept that the comment was talking about real tradeoffs then I'm a bit baffled that you thought it was FUD. It seems like an important thing to be talking about when there's a post advocating moving away from PaaS and doing it all yourself. It's great if you already knew all about all that and didn't need to discuss it, but just stare into the abyss of the other comments and you'll see that others very much don't understand those tradeoffs at all.


Exactly. It all depends on your needs and — to be honest — the quality of your sysops engineering. You may not only need dedicated sysops, but you may incur higher incidental costs with lost productivity when your solution inevitably goes down (or just from extra dev load when things are harder to use).

That said, at least in 2016 Heroku was way overpriced for high-volume sites. My startup of 10 engineers w/ 1M monthly active users saved 300k+/yr switching off Heroku. But we had Jerry. Jerry was a beast and did most of the migration work in a month, with some dead-simple AWS scaling. His solution lacked many of the features of Heroku, but it massively reduced costs for developers running full test stacks, which in turn increased internal productivity. And did I mention it was dead simple? It's hard to overstate how valuable this was for the rest of us, who could easily grok the inner workings and know the consequences of our decisions.

Perhaps this stack will open that opportunity to less equipped startups, but I've found few open source "drop-in replacements" to be truly drop-in. And I've never found k3s to be dead simple.


Sorry, but that's just a ton of FUD. We run both a private cloud and (for a few customers) AWS. Of course you have more maintenance on-prem, but a typical k8s update is maybe a few hours of work when you know what you are doing.

AWS is also complex, also requires configuration, and also generates alerts in the middle of the night.

It's still a lot cheaper than a managed service.


> Of course you have more maintenance on-prem, but a typical k8s update is maybe a few hours of work when you know what you are doing.

You just mentioned one dimension of what I described, and "when you know what you are doing" is doing a lot of the heavy lifting in your argument.

> AWS is also complex, also requires configuration, and also generates alerts in the middle of the night.

I'm confused. So we are in agreement there?

I feel you might be confusing my point with an on-prem vs AWS discussion, and that's not it.

This is encouraging teams to run databases / search / cache / secrets and everything on top of k8s and assuming a magic k8s operator is doing the same job as a team of humans and automation managing all those services for you.


> assuming a magic k8s operator is doing the same job as a team of humans and automation managing all those services for you.

What do you think AWS is doing behind the scenes when you run Postgres RDS? It's their own equivalent of a "K8S operator" managing it. They make bold claims about how good/reliable/fault-tolerant it is, but the truth is that you can't actually test or predict its failure modes, and it can fail, and fail badly (I've had it get into a weird state where it took 24h to recover, presumably once an AWS guy finally SSH'd in and fixed it manually - I could've done the same but without having to wait 24h).


Fair, but my point is that AWS has a full team of people that built and contributed to that magic box that is managing the database. When something goes wrong, they're the first ones to know (ideally) and they have a lot of know-how on what went wrong, what the automation is doing, how to remediate issues, etc.

When you use a k8s operator you're using an off-the-shelf component with very little idea of what it's doing and how. When things go wrong, you don't have a team of experts to look into what failed and why.

The tradeoff here is obviously cost, but my point is those two levels of "automation" are not comparable.

Edit: well, when I write "you" I mean most people (me included)


> Fair, but my point is that AWS has a full team of people that built and contributed to that magic box that is managing the database.

You sure about that? I used to work at AWS, and although I wasn't on K8S in particular, I can tell you from experience that AWS is a revolving door of developers who mostly quit the instant their two-year sign-on bonus is paid out, because working there sucks ass. The ludicrous churn means there actually isn't very much buildup of institutional knowledge.


> Fair, but my point is that AWS has a full team of people that built and contributed to that magic box that is managing the database

You think so. The real answer is maybe, maybe not. They could have all left, and the current maintainers might not actually know the codebase. There's no way to know.

> When things go wrong, you don't have a team of experts to look into what failed and why.

I've been on both sides of consulting / managed services teams and each time the "expert" was worse than the junior. Sure, there's some luck and randomness but it's not as clear cut as you make it.

> and they have a lot of know-how on what went wrong, what the automation is doing, how to remediate issues, etc.

And to continue on the above, I've also worked at SaaS/IaaS/PaaS companies where the person on call didn't know much about the product (not always their fault) and so couldn't contribute much during an incident.

There's just too much trust and good faith in this reply. I'm not advocating managing everything yourself, but don't trust that the experts have it all covered either.


If you don't want the complexity of operators, you'll probably be OK with a DB cluster outside of k8s. They're quite easy to set up and automate, and there are straightforward tools to monitor them (e.g. from Percona).

If you want to fully replicate AWS it may be more expensive than just paying AWS. But for most use cases it's simply not necessary.


As with everything it's not black or white, but rather a spectrum. Sure, updating k8s is not that bad, but operating a distributed storage solution is no joke. Or really anything that requires persistence and clustering (like elastic).

You can also trade operational complexity for cash via support contracts and/or enterprise solutions (like just throwing money at Hitachi for storage rather than trying to keep Ceph alive).


If you don't need something crazy you can just grab what a lot of enterprises have already done for years, which is to drop in a few big storage servers and call it a day, connecting over iSCSI/NFS/whatever.

If you are in Kubernetes land you probably want object storage and some kind of PVC provider. Not thaaat different from an old-fashioned iSCSI/NFS setup to be honest, but in my experience different enough to cause friction in an enterprise setting. You really don't want a ticket-driven, manual provisioning process for shares.

A PVC provider is nice, sure, but depending on how much you need/want, the simplest cases can be "mount a subdirectory from a common exported volume", and for many applications ticket-based provisioning will be enough.

That said, on my todo list is some tooling to make simple cases with Linux NFS or SMI-capable servers work as PVC providers.
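
For the really simple static case it's barely more than a hand-written PersistentVolume pointing at the export. A minimal sketch with the official Kubernetes Python client (server address, export path and size are placeholders, not a recommendation):

    # Statically provision an NFS-backed PersistentVolume, i.e. the
    # "mount a subdirectory from a common exported volume" case above.
    # Server, path and capacity are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

    pv = client.V1PersistentVolume(
        metadata=client.V1ObjectMeta(name="shared-nfs-app1"),
        spec=client.V1PersistentVolumeSpec(
            capacity={"storage": "50Gi"},
            access_modes=["ReadWriteMany"],
            persistent_volume_reclaim_policy="Retain",
            storage_class_name="nfs-static",
            nfs=client.V1NFSVolumeSource(server="10.0.0.5", path="/exports/app1"),
        ),
    )

    client.CoreV1Api().create_persistent_volume(body=pv)
    # A workload then binds to it with a PersistentVolumeClaim that requests
    # storageClassName "nfs-static" (or names the volume explicitly).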


Sure, but it requires that your engineers are vertically capable. In my experience, about 1 in 5 developers has the required experience and does not flat out refuse to have vertical responsibility over their software stack.

And that number might be high; in larger, more established companies there might be more engineers who want to stick to their comfort bubble. So many developers reject the idea of writing SQL themselves instead of having the ORM do it, let alone know how to configure replication and failover.

I'd maybe hire for the people who could and would, but the people advocating for just having the cloud take care of these things have a point. You might miss out on an excellent application engineer if you reject them for not having any Linux skills.


Our devs are responsible for their Docker image and the app. Another team manages the platform. You need some level of cooperation, of course, but none of the devs care too much about k8s internals or how the storage works.

Original creator and maintainer of Reclaim the Stack here.

> you also removed monitoring of the platform

No we did not: Monitoring: https://reclaim-the-stack.com/docs/platform-components/monit...

Log aggregation: https://reclaim-the-stack.com/docs/platform-components/log-a...

Observability is on the whole better than what we had at Heroku since we now have direct access to realtime resource consumption of all infrastructure parts. We also have infinite log retention which would have been prohibitively expensive using Heroku logging addons (though we cap retention at 12 months for GDPR reasons).

> Who/What is going to be doing that on this new platform and how much does that cost?

My colleague and I, who created the tool together, manage infrastructure / OS upgrades and look into issues, etc. So far we've been in production on this platform for 1.5 years. On average we spend perhaps 3 days per month doing platform-related work (mostly software upgrades). The rest we spend on full stack application development.

The hypothesis for migrating to Kubernetes was that the available database operators would be robust enough to automate all common high availability / backup / disaster recovery issues. This has proven to be true, apart from the Redis operator which has been our only pain point from a software point of view so far. We are currently rolling out a replacement approach using our own Kubernetes templates instead of relying on an operator at all for Redis.

> Now you need to maintain k8s, postgresql, elasticsearch, redis, secret management, OSs, storage... These are complex systems that require people who understand how they work internally

Thanks to Talos Linux (https://www.talos.dev/), maintaining K8s has been a non issue.

Running databases via operators has been a non issue, apart from Redis.

Secret management via sealed secrets + CLI tooling has been a non issue (https://reclaim-the-stack.com/docs/platform-components/secre...)

OS management with Talos Linux has been a learning curve but not too bad. We built talos-manager to make bootstrapping new nodes into our cluster straightforward (https://reclaim-the-stack.com/docs/talos-manager/introductio...). The only remaining OS-related maintenance is OS upgrades, which requires rebooting servers, but that's about it.

For storage we chose to go with simple local storage instead of complicated network based storage (https://reclaim-the-stack.com/docs/platform-components/persi...). Our servers come with datacenter grade NVMe drives. All our databases are replicated across multiple servers so we can gracefully deal with failures, should they occur.

> Who is going to upgrade kubernetes when they release a new version that has breaking changes?

Upgrading Kubernetes can in general be done with zero downtime and is handled by a single talosctl CLI command. Breaking changes in K8s imply changes to existing resource manifest schemas and are detected by tooling before upgrades occur. Given how stable Kubernetes resource schemas are and how averse the community is to pushing breaking changes, I don't expect this to cause major issues going forward. But of course software upgrades will always require due diligence and can sometimes be time consuming; K8s is no exception.
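
For illustration, the upgrade boils down to one CLI invocation against a control plane node; a hypothetical wrapper (node address and target version are placeholders, check `talosctl upgrade-k8s --help` for the exact flags):

    # Hypothetical wrapper around Talos' built-in Kubernetes upgrade command.
    # Node address and target version below are placeholders.
    import subprocess

    def upgrade_kubernetes(control_plane_node: str, target_version: str) -> None:
        subprocess.run(
            ["talosctl", "--nodes", control_plane_node, "upgrade-k8s", "--to", target_version],
            check=True,
        )

    upgrade_kubernetes("10.0.0.10", "1.30.0")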

> What happens when ElasticSearch decides to split-brain and your search stops working?

ElasticSearch, since major version 7, should not enter a split-brain state if correctly deployed across 3 or more nodes. That said, in case of a complete disaster we could either rebuild our index from the source of truth (Postgres) or do disaster recovery from off-site backups.

It's not like using ElasticCloud protects against these things in any meaningfully different way. However, the feedback loop of contacting support would be slower.

> When the DB goes down or you need to set up replication?

Operators handle failovers. If we were to lose all replicas in a major disaster event we would have to recover from off-site backups. The same rules would apply for managed databases.

> What is monitoring replication lag?

For Postgres, which is our only critical data source, replication lag monitoring + alerting is built into the operator.

It should be straightforward to add this for Redis and ElasticSearch as well.

> Or even simple things like disks being close to full?

Disk space monitoring and alerting is built into our monitoring stack.
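
For a rough idea of what these checks look like, assuming a Prometheus-style monitoring stack they boil down to simple PromQL queries. A minimal sketch (metric names follow the usual postgres_exporter / node_exporter conventions and the endpoint is a placeholder, so treat it as illustrative rather than our exact alerting rules):

    # Illustrative only: check replication lag and disk headroom via Prometheus' HTTP API.
    # Metric names assume common postgres_exporter / node_exporter conventions.
    import requests

    PROMETHEUS = "http://prometheus.monitoring.svc:9090"  # placeholder in-cluster address

    def instant_query(promql: str) -> list:
        """Run an instant PromQL query and return the result vector."""
        resp = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": promql}, timeout=10)
        resp.raise_for_status()
        return resp.json()["data"]["result"]

    # Postgres replicas lagging more than 30 seconds behind the primary
    lagging = instant_query("pg_replication_lag > 30")

    # Filesystems with less than 10% free space
    full_disks = instant_query("node_filesystem_avail_bytes / node_filesystem_size_bytes < 0.10")

    for sample in lagging + full_disks:
        print("ALERT", sample["metric"], sample["value"])

In practice the equivalent expressions would live in alerting rules rather than in a script polling like this.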

At the end of the day I can only describe to you the facts of our experience. We have reduced costs by enough to cover hiring about 4 full-time DevOps people so far. But we have hired 0 new engineers and are managing fine with just a few days of additional platform maintenance per month.

That said, we're not trying to make the point that EVERYONE should Reclaim the Stack. We documented our thoughts about it here: https://reclaim-the-stack.com/docs/kubernetes-platform/intro...


Since you're the original creator, can you open the site of your product, and find the link to your project that you open sourced?

- Front page links to docs and Discord.

- First page of docs only has a link to discord.

- Installation references a "get started" repo that is... somehow also the main repo, not just "get started"?


The get-started repo is the starting point for installing the platform. Since the platform is gitops based, you'll fork this repo as described in: https://reclaim-the-stack.com/docs/kubernetes-platform/insta...

If this is confusing, maybe it would make sense to rename the repo to "platform" or something.

The other main component is k (https://github.com/reclaim-the-stack/k), the CLI for interacting with the platform.

We have also open sourced a tool for deploying Talos Linux on Hetzner called talos-manager: https://github.com/reclaim-the-stack/talos-manager (but you can use any Kubernetes, managed or self-hosted, so this is use-case specific)


You talk a lot about the platform on the site and in the overview page, yet there are no links to the platform itself.

There's not even an overview of what the platform is, how everything is tied together, or where to look at it: just bombastic claims, disparate descriptions of its constituent components (with barely any links to how they are used in the "platform" itself), and a link to a repo called "get-started".


Assuming average salary of 140k/year, you are dedicating 2 resources 3 days a month and this is already costing you ~38k/year on salaries alone, and that's assuming your engineers have somehow mastered _both_ devops and software (very unlikely) and that they won't screw anything up. I'm not even counting the time it took you to migrate away.
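
Rough back-of-envelope behind that figure, using the assumptions stated above (140k/year salary, ~260 working days/year, 2 engineers, 3 days of platform work per month each):

    # Back-of-envelope for the ~38k/year figure above (assumptions as stated in this comment).
    salary_per_year = 140_000
    working_days_per_year = 260                                       # ~52 weeks * 5 days
    cost_per_person_day = salary_per_year / working_days_per_year     # ~538/day

    engineers = 2
    platform_days_per_month = 3                                       # per engineer
    person_days_per_year = engineers * platform_days_per_month * 12   # 72

    print(round(person_days_per_year * cost_per_person_day))          # ~38,769 -> the "~38k/year" above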

This also assumes your infra doesn't grow and require more maintenance, and that you don't have to deal with other issues.

Focusing on building features and generating revenue is much more valuable than wasting precious engineering time maintaining stacks.

This is hardly a "win" in my book.


Right, because your outsourced cloud provider takes absolutely zero time from any application developer. Any issue with AWS or GCP is just one magic support ticket away, and their costs already include top-priority support.

Right? Right?!


Heroku isn’t really analogous to AWS and GCP. Heroku actually is zero effort for the developers.

> Heroku actually is zero effort for the developers.

This is just blatantly untrue.

I was an application developer at a place using Heroku for over four years, and I guarantee you we exceeded the aforementioned 2-devs-3-days-per-month in man hours in my time there due to Heroku:

- Matching up local env to Heroku images, and figuring out what it actually meant when we had to move off deprecated versions

- Peering at Heroku charts because of the lack of real machine observability, and eventually using Node to capture OS metrics and push them into our existing ELK stack because there was just no alternative

- Fighting PR apps to get the right set of env vars to test particular features, and maintaining a set of query-string overrides because there was no way to automate it into the PR deploy

I'm probably forgetting more things, but the idea that Heroku is zero effort for developers is laughable to me. I hate Docker personally, but it's still way less work than Heroku was to maintain, even if you go all the way down the rabbit hole of optimizing away build times etc.


> Assuming average salary of 140k/year

Is that what developers at your company cost?

Just curious. In Sweden the average devops salary is around 60k.

> you are dedicating 2 resources 3 days a month and this is already costing you ~38k/year on salaries

OK. So we're currently saving more than 400k/year on our migration. That would easily be worth 38k/year in salaries to us. But note that our actual salary costs are significantly lower.

> that's assuming your engineers have somehow mastered _both_ devops and software (very unlikely)

Both my colleague and I are proficient at operations as well as programming. I personally believe the skill sets are complementary and that web developers need to get into operations / scaling to fully understand their craft. But I've been deploying web sites since the 90s. Maybe I'm of a different breed.

We achieved four nines of uptime in our first year on this platform, which is more than we ever achieved using Heroku + other managed cloud services. We won't reach four nines in our second year due to a network failure at Hetzner, but so far we have not had downtime due to software issues.
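
(For reference, four nines works out to roughly 52 minutes of allowed downtime per year:)

    availability = 0.9999
    minutes_per_year = 365.25 * 24 * 60
    print(round((1 - availability) * minutes_per_year, 1))  # ~52.6 minutes/year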

> This also assumes your infra doesn't grow and require more maintenance

In general the more our infra grows the more we save (and we're still in the process of cutting additional costs as we slowly migrate more stuff over). Since our stack is automated we don't see any significant overhead in maintenance time for adding additional servers.

Potentially some crazy new software could come along that would turn out to be hard to deploy. But if it would be cheaper to use a managed option for that crazy software we could still just use a managed service. It's not like we're making it impossible to use external services by self-hosting.

Note that I wouldn't recommend Reclaim the Stack to early stage startups with minor hosting requirements. As mentioned on our site I think it becomes interesting around $5,000/month in spending (but this will of course vary on a number of factors).

> Focusing on building features and generating revenue is much more valuable than wasting precious engineering time maintaining stacks.

That's a fair take. But the trade-offs will look different for every company.

What was amazing for us was that the developer experience of our platform ended up being significantly better than Heroku's. So we are now shipping faster. Reducing costs by an order of magnitude also allowed us to take on data intensive additions to our product which we would have never considered in the previous deployment paradigm since costs would have been prohibitively high.


> Just curious. In Sweden the average devops salary is around 60k.

Well, there's salary, and there's total employee cost. Not sure how it works in Sweden, but here in Belgium a good rule of thumb is that an employer pays roughly 2.5 times what an employee nets at the end after taxes etc. So a net wage of €3300/month, or about €40k/year, ends up costing the employer about €100k.

I'm a freelance devops/sre/platform engineer, and all I can tell you is that even for long-term projects, my yearly invoice is considerably higher than that.


This is more FUD. Employer cost is nowhere near 2.5x employee wages.

Hey there, this is a comprehensive and informative reply!

I had two questions just to learn more.

* What has been your experience with using local NVMes with K8s? It feels like K8s has some assumptions around volume persistence, so I'm curious if these impacted you at all in production.

* How does 'Reclaim the Stack' compare to Kamal? Was migrating off of Heroku your primary motivation for building 'Reclaim the Stack'?

Again, asking just to understand. For context, I'm one of the founders at Ubicloud. We're looking to build a managed K8s service next and evaluating trade-offs related to storage, networking, and IAM. We're also looking at Kamal as a way to deploy web apps. This post is super interesting, so wanted to learn more.


K8s works with both local storage and networked storage. But the two are vastly different from an operations point of view.

With networked storage you get fully decoupled compute / storage, which allows Kubernetes to reschedule pods arbitrarily across nodes. But the trade-off is that you have to run additional storage software, end up with more architectural complexity, and get performance bottlenecked by your network.

Please check out our storage documentation for more details: https://reclaim-the-stack.com/docs/platform-components/persi...

> How does 'Reclaim the Stack' compare to Kamal?

Kamal doesn't really do much at all compared to RtS. RtS is more or less a feature-complete Heroku alternative. It comes with monitoring / log aggregation / alerting etc. and also automates high-availability deployments of common databases.

Keep in mind 37signals has a dedicated devops team with 10+ engineers. We have 0 full-time devops people. We would not be able to run our product using Kamal.

That said, I think Kamal is a fine fit for e.g. running a Rails app using SQLite on a single server.

> Was migrating off of Heroku your primary motivation for building 'Reclaim the Stack'?

Yes.

Feel free to join the Discord and start a conversation if you want to bounce ideas for your k8s service :)


Who says they reduced costs by cutting staff? They could instead have scaled their staff better.

>Who/What is going to be doing that on this new platform and how much does that cost?

If you're already a web platform with hired talent (and someone using Heroku for a SaaS probably already is), I'd be surprised if the marginal cost was 10x. That paid support of course comes at a premium, and isn't too flexible on what level of support you need.

And yeah, it isn't apples to apples. Maybe you are in a low-CoL area and can find a decent DevOps engineer for 80-100k. Maybe you're in SF and any extra dev will be 250k. The cost will vary immensely.


This is FUD unless you're running a stock exchange or payment processor where every minute of downtime will cost you hundreds of thousands. For most businesses this is fear-mongering to keep the DevOps & cloud industry going and ensure continued careers in this field.

It's not just about downtime, but also about not getting your systems hacked, not losing your data if sh1t hits the fan, regulation compliance, flexibility (e.g. ability to quickly spin up new test envs) etc.

My preferred solution to this problem is different, though. For most businesses and apps, a monolith (maybe with a few extra services) + 1 relational DB is all you need. In such a simple setup, many of the problems faced either disappear or get much smaller.


> also about not getting your systems hacked...

The only systems I have ever seen get compromised firsthand were in public clouds and because they were in public clouds. Most of my career has been at shops that, for one reason or another, primarily own their own infrastructure; cloud represents a rather small fraction. It's far easier to secure a few servers behind a firewall than to figure out the Rube Goldberg machine that is cloud configuration.

> not losing your data if sh1t hits the fan

You can use off-site backup without using cloud systems, you know? Backblaze, AWS Glacier, etc. are all pretty reasonable solutions. Most of the time when I've seen the need to exercise the backup strategy it's because of some software fuckup, not something like a disk dying. Using a managed database isn't going to save you when the intern TRUNCATEs the prod database by accident (and if something like that happens, it means you fucked up elsewhere).

> regulation compliance

Most shops would be way better suited to paying a payment processor like Stripe, or other equivalent vendors for similarly protected data. Defense is a whole can of worms; "government clouds" are a scam that makes you more, not less, vulnerable to an unauthorized export.

> flexibility (e.g. ability to quickly spin up new test envs) etc.

You actually lose flexibility by buying into a particular cloud provider, not gain it. Some things become easier, but many things become harder. Also, IME the hard part of creating reasonable test envs is configuring your edge (ingress, logging infra) and data.


Speaking of the exchanges (at least the sanely operated ones), there’s a reason the stack is simplified compared to most of what is being described here.

When some component fails you absolutely do not want to spend time trying to figure out the underlying cause. Almost all the cases of exchange outages you hear about in the media are due to unnecessary complexity added to what is already a remarkably complex, distributed (in most well-designed cases) state machine.

You generally want things to be as simple and streamlined as possible so when something does pop (and it will) your mean time to resolution is inside of a minute.


I run a business that is a long, long way from a stock exchange or a payment processor. And while a few minutes of downtime is fine, 30 minutes or a few hours at the wrong time will really make my customers quite sad. I've been woken in the small hours with technical problems maybe a couple of times over the last 8 years of running it, and I am quite willing to pay more for my hosting to avoid that happening again.

Not for Heroku, they're absolute garbage these days, but definitely for a better-run PaaS.

Plenty of situations where running it yourself makes sense, of course. If you have the people and the skills available (and the cost tradeoffs make sense), or if downtime really doesn't matter much at all to you, then go ahead and consider things like this (or possibly simpler self-hosting options, it depends). But no, "you gotta run Kubernetes yourself unless you're a stock exchange" is not a sensible position.


I don't know why people don't value their time at all. PaaS offerings are so cheap these days for the majority of projects that it just isn't worth spending your own time managing the whole infrastructure stack.

If you're forced by regulation or if you just want to do it to learn, then yeah. But if your business is not running infra, or if your infra demands aren't crazy, then PaaS and what-have-you-flavored-cloud-container products will cost you ~1-2 work weeks of a single developer annually.


Unless you already know how to run infra quickly and efficiently. Which – spoiler – you can achieve if you want to learn.

It's not FUD; it's pointing out the very real fact that most problems are not engineering problems that you can fix by choosing the one "magical" engineering solution that will work for all (or even most) situations.

You need to understand your business and your requirements. We engineers love to think that we can solve everything with the right tools or the right engineering solutions. That's not true. There is no "perfect framework," no one-size-fits-all solution that will magically solve everything. What "stack" you choose, what programming language, which frameworks, which hosting providers ... these are all as much business decisions as they are engineering decisions.

Good engineering isn't just about finding the simplest or cheapest solution. It is about understanding the business requirements and finding the right solution for the business.


Having managers (business people) make technical decisions based on marketing copy is how you get 10 technical problems that metastasize into 100 business problems, usually with little awareness of how we got there in the first place.

Nice straw-man. I never once suggested that business people should be making technical decisions. What I said was that engineering solutions need to serve the needs of the business. Those are insanely different statements. They are so different that I think that you actively tried to misinterpret my comment so that you could shoot down something I didn't say.

Well, you're using an overbroad definition of "business decisions", so forgive my interpretation. Of course everything that goes on in a business could be conflated into a "business decision". But not everyone at the business is an MBA, so to speak. "Business" has particular semantics in this case; otherwise "engineering/technical" becomes an empty descriptor.



Not sure if this is going to help Heroku's people at all, but I feel bad for them now! Haha. I'm not a Heroku employee. I don't even work for any sort of managed service / platform provider. This is indeed a new account, but not a throwaway account! I intend to use it long term.

You really think that incredibly lukewarm argument for Heroku is so extreme that it could only have been written by some kind of undercover shill?

Why, yes?

Please don’t do this. It’s against HN’s guidelines.

Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

https://news.ycombinator.com/newsguidelines.html


Since DHH has been promoting the 'do-it-yourself' approach, many people have fallen for it.

You're asking the right questions that only a few people know they need answers to.

In my opinion, the closest thing to "reclaiming the stack" while still being a PaaS is to use a "deploy to your cloud account" PaaS provider. These services offer the convenience of a PaaS provider, yet allow you to "eject" to using the cloud provider on your own should your use case evolve.

Example services include https://stacktape.com, https://flightcontrol.dev, and https://www.withcoherence.com.

I'm also working on a PaaS comparison site at https://paascout.io.

Disclosure: I am a founder of Stacktape.


