Hacker News

This post proves parent's point though.

You're not doing anything even remotely close to the features offered by cloud providers or even managed hosting providers.

Disaster recovery? Geographically separate redundant servers with failovers? Automated (and proven to work) backups? One-stop access control for infra maintenance? Audit controls for your database and storage objects? Tape backups?

Even today, supporting all of those things takes a small army of specialists. Granted, a heck of a lot of projects can get away with not having any of this. But the use cases are out there, and hosting and maintaining all of that on-prem is a different level entirely.

I understand your use case, but yours is very, very far from the sheer and absolute complexity and features that enterprise data centers have.




> You're not doing anything even remotely close to the features offered by cloud providers or even managed hosting providers.

So what?

Who in their right mind believes you need to, say, operate and maintain half a dozen types of RDBMS in three flavors, along with two or four or eight different message brokers, your own convoluted infrastructure-as-code multiplied by three, a repackaged FLOSS offering... And a ground station?

Let's not be mad, here. There are proper, full-blown, popular, global-scale cloud service providers. That. Only. Offer. VMs.

Are we so drunk with corporate kool-aid to believe that we are missing out because we are missing... What do you believe you're missing, actually?

I repeat: there are popular professional cloud service providers whose business consists of providing either VMs or access to bare metal. That's where real-world companies run their real-world businesses. Why are we supposed to believe that you need more to operate your own stuff?


You are assuming that the vast majority of shops have the capacity to impose a very limited number of technologies and secure them through common best practices.

This is about as far from the truth as I have experienced in life.

Fortune 500 companies run software on innumerable platforms, using hundreds of products from dozens of vendors, many long dead. Same thing with governments, at every level of scale. Telecoms? Utility providers? Medium-sized businesses who are not in tech? Specialist software runs in a basement rack, eventually gets moved to a datacenter, and compliance requirements begin demanding all the bells and whistles I just mentioned.

Without a doubt there's a lot of gross compute power that lives on the VMs you just mentioned. But all that is probably a fraction of the financial processing done by some AS/400 or mainframe running a nightly batch job, with software written decades ago and licensing costs going into 7 figures a year.

What you're asking for just doesn't exist. You can do what you're mentioning across, maybe, a single product line and a half-dozen teams. But even that company needs to use CRMs, ERPs, and custom stuff for which you cannot possibly define platform requirements on your own, limited, terms.

A customer whose Unix servers I used to admin had software on IBM mainframes, IBM AS/400s, Solaris, AIX, two SCO Unix machines running some proprietary hardware control plane, a few thousand Windows machines, etc. You want a "real" ERP product? It's gonna run on Oracle or DB2, forget about Postgres. That app you made 15 years ago running on MySQL with the MyISAM storage engine? Forget about ever upgrading that. Need to interact with banks? Holy smokes have I got bad news for you. You need software to interact with medical records that requires special legal compliance across multiple jurisdictions? Well, no one cares what that runs on as long as it keeps the millions rolling in.


>Disaster recovery? Geographically separate redundant servers with failovers? Automated (and proven to work) backups? One-stop access control for infra maintenance? Audit controls for your database and storage objects? Tape backups?

These are our dev+test setups, and we're looking far more carefully at prod for the reasons you touch on. Those things aren't necessary for every project either, e.g. hosting computer vision demos.

For our government projects, the government hosts it on their own OpenShift cluster that they maintain (including their own data centre), due to requirements for all data to be hosted within our borders. The OpenShift cluster I set up is nowhere near as well maintained as the government's: they have multiple FTEs and it runs most of the open-source gov't code. They have tape backups, rolling on-call staff, public developer chat for support, the whole deal.

What I set up is far simpler. We have daily/weekly/monthly rolling backups of Postgres pods. We store some of those backups on DigitalOcean, but that's just a cheapo little Linux server.
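The daily/weekly/monthly rolling scheme above boils down to a retention policy: keep the last N dailies, plus the newest backup from each of the last few ISO weeks and calendar months, and prune the rest. A minimal sketch of that selection logic (the retention counts and function name are hypothetical, not the actual script):

```python
from datetime import date, timedelta

def backups_to_keep(dates, daily=7, weekly=4, monthly=6):
    """Pick which backup dates to retain under a daily/weekly/monthly
    rolling policy. `dates` is any iterable of datetime.date objects;
    the counts are illustrative defaults, not real config."""
    dates = sorted(dates, reverse=True)          # newest first
    keep = set(dates[:daily])                    # the most recent dailies
    weeks_seen, months_seen = set(), set()
    for d in dates:
        wk = d.isocalendar()[:2]                 # (ISO year, ISO week)
        if wk not in weeks_seen and len(weeks_seen) < weekly:
            weeks_seen.add(wk)
            keep.add(d)                          # newest backup in that week
        mo = (d.year, d.month)
        if mo not in months_seen and len(months_seen) < monthly:
            months_seen.add(mo)
            keep.add(d)                          # newest backup in that month
    return sorted(keep, reverse=True)

# Example: 60 consecutive daily backups ending 2024-06-01
today = date(2024, 6, 1)
dates = [today - timedelta(days=i) for i in range(60)]
kept = backups_to_keep(dates)
```

Anything not returned by `backups_to_keep` gets deleted on the next run, so storage stays bounded while older restore points thin out gradually.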

But now a team of 30 developers can easily spin up their own projects using a web-based GUI from basically just providing a Dockerfile or a link to a git repo. One of the oft-touted organizational benefits of "cloud" is that you don't have to wait a week for Ops to provision a VM. We get all that.

>I understand your use case, but your is very, very far from the sheer and absolute complexity and features that enterprise data centers have.

My point is that many things people host in AWS do not need enterprise quality. If you're a startup, then almost by definition you do not need enterprise quality (though, as always, it depends). We saved a tonne of money. I'm sure many others would too, by self-hosting and learning a moderate amount of Linux / Kubernetes.





