"Hetzner decided to cancel our account and terminate all servers" (mastodon.social)
421 points by unbelauscht 34 days ago | 374 comments



Whenever I ask a CTO if they have a backup (or plan B), they say "we're on AWS, we back up there, and they will never go down as a company." And then I ask them what they do when their account gets shut down (e.g. because they are selling something bad on Amazon and have the same phone number as the company account), or about the case some years ago where GCP closed an account because someone had a wrongly classified image on their drive.

You should have all your backups in a different location and your Terraform setup tested with a different cloud provider; otherwise you're risking the company.

[Edit] Where I'm coming from: this doesn't say anything against Hetzner. I have been with them for 20+ years; they have stopped individual servers in that time frame, but have never cancelled my whole account.


My company has gone the route of multiple AWS accounts to avoid the issue they introduced with horrible planning.

First they wanted us out of on-premise, and told us costs wouldn't matter.

Then they wanted us to be 'cloud agnostic', but when deadlines were handed down that changed to 'get it working in AWS ASAP, the tech debt doesn't matter'.

Now they're freaking out about AWS costs, and we're back to juggling 'cloud agnostic' and 'reduce cost to serve in all clouds' priorities on top of features and maintenance, both of which are 10x slower due to tech debt and the plethora of bugs.

I really need to find a new job soon. It's insane how badly the execs and upper management are running this company. Every day there is a knee-jerk reaction from someone so detached from the reality of things, or with so little understanding of how they work, that they do nothing but add process problems that barely address the issues they think they're solving.


The biggest issue I see here is the misguided assumption that Cloud is just automatically and universally better than on-premise or professionally managed, hosted hardware. This isn't true in most cases.

There are so many providers, and therefore examples, of physical tin being accessible in under a minute with cost:hardware ratios that blow Cloud out of the sky (pun! ha!). OVH have a server for USD $95/month (with no commitments) that can be brought up and made available in 120 _seconds_, with six 3.8GHz cores, 32GB of RAM, 2x960GB NVMe SSDs, and 1Gbit/s of UNMETERED, guaranteed bandwidth... that's absolutely insane, and it's fully managed from the hardware down, so arguments like "bUT yoU haVe to MAintAin hardWARE!" are just not true _at all_.


It was during the wave of "Moving costs from capex to opex give C levels more flexibility" movement after the initial 'cloud is better' wave. In retrospect it seems like another of their badly thought out reactions to a situation they caused by short term thinking, in this case the issues caused by trying to reduce headcount on teams supporting legacy and new physical locations while increasing the pace of new locations.

Those costs were moved and ended up higher than the capex costs were to begin with, which everyone but the decision makers expected (they brushed it off every time they were asked in company Q&As). Opex margins became a major issue and the company did performative layoffs and restructuring to appease the shareholders (then re-hired ~1/3 of the laid-off staff within the next 8 months because it actually needed them).

The level of 'bad decision leading to bad decision' happening is somewhere between absurd and depressing at this point.


Good summary.

I think this all boils down to a knee-jerk reaction culture that doesn't think about the second or third degree consequences and/or beyond the next 2-3 years.


People on HN refuse to see this as an option in these discussions. It's either "cloud" or "build and manage your own physical rack inside a colo".


It's wild to me how hardcoded some of these people are. I think a lot of the younger generation on here might not have experienced the "bare metal days", so they don't know how far you can push the hardware and how much you can squeeze out of it.


And frankly, how easy it is.


Precisely. Operating systems aren’t hard. They’re so easy and well established it’s crazy not to use them directly, and even though I’m not the world’s biggest Docker fan, Compose is kind of awesome to be honest. Deploying software and maintaining an OS is simple in this day and age.


Having gone from managing several thousand physical to virtual/cloud instances, there are certainly major differences and the company has to structure its approach accordingly (IMO).

On-premise, in my opinion, needs a dedicated team managing hardware and leveraging solutions to provide it as VMs/containers/etc. to other teams. Another team focuses on OS-level security and the base image; then your dev teams can effectively focus on their app and leverage the automated tools provided by the hardware and OS teams.

Cloud gives you at least half of that, or all of it depending on your approach, for a cost. There are points where the cost makes sense and times when it doesn't, and typically that changes through the life of a company. Unfortunately there is a not insignificant overhead even with current tools to maintaining a truly substrate agnostic infrastructure that can be deployed on top of multiple clouds, on-premise etc... so companies are locked in even when economics change.


> On-premise, in my opinion, needs a dedicated team managing hardware and leveraging solutions to provide it as VMs/containers/etc. to other teams.

You're assuming that "on premise" equates to "inside our building, in racks we've installed, using power and networking we have to manage." You're correct if that's the case for your business, but my argument is based around the idea that you can use _managed_ hosting providers of physical hardware that'll be either next door to you, in the same city, or close to your users (e.g., you're a business in Germany but your customer base is in London, so you host the servers with a London-based provider).

The idea that you have to manage hardware is greatly diminished when you consider the availability of managed providers that are dirt cheap.


That's a good point, and at small and medium scales those are very cost-effective alternatives to cloud or fully managed. Not many managed providers can provide a full equivalent to an on-premise team, and it quickly becomes cheaper to run it yourself once you scale into large dedicated instances and high network traffic. Before then, though, it's often better than the cloud in many situations.


> On-premise, in my opinion, needs a dedicated team managing hardware and leveraging solutions to provide it as VMs/containers/etc. to other teams. Another team focuses on OS-level security and the base image; then your dev teams can effectively focus on their app and leverage the automated tools provided by the hardware and OS teams.

Exactly. At which point, you’re essentially reinventing a cloud, usually not very well. If you have access to really good people you can pull this off, and that’s why you see so many people on HN doing the “who needs cloud” flex.

But the reality is that for most companies, managing non-trivial amounts of hardware is not a core competency, and they regularly shoot themselves in the foot by trying it.


If you are in the cloud, you are going to need a team that understands cloud networking, storage, deployment, security etc. You will need enough people to maintain support rotations and survive normal churn.

It seems like many people/organizations believed that they would be rid of the whole "operations problem" once they shifted all their workloads from on-prem to cloud. They believed that they paid a full team just for running cables and replacing broken fans/hard drives/PSUs, when that aspect of on-prem is a tiny (but non-zero) amount of the work.


I don't believe a lot of this is required.

OS level security? So, "apt update && apt upgrade", then? I mean, what else are you doing, writing patches for the kernel? Checking every line of code that runs? Are you aware of how effective SELinux and systemd containers are? Just a simple firewall at the OS level? Maybe even just using Tailscale (or the open source Headscale) to introduce zero trust access capabilities.
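
To make that concrete: on a single Debian/Ubuntu box, "OS level security" can be as small as this sketch (package choices and open ports are my assumptions, adjust to taste):

    # keep security patches flowing automatically
    apt install -y unattended-upgrades
    dpkg-reconfigure -plow unattended-upgrades

    # default-deny firewall, allow only SSH and HTTPS
    apt install -y ufw
    ufw default deny incoming
    ufw allow OpenSSH
    ufw allow 443/tcp
    ufw enable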

There's a Terraform provider for Proxmox, which is an excellent hypervisor. Making a template, configuration included, takes less than an hour.

You do need an Ops person for sure, but an entire _team_?


>"apt update && apt upgrade",

Across 10k-100k+ servers, all running services and needing to orchestrate restarting across the whole fleet, while providing 0 downtime or impact to thousands of clients with terabytes of data being processed and analyzed at any given time.

Sure, what's so hard about changing a tire? Well, try doing it on an 18-wheeler while it's driving down the highway, without any impact on its speed.

> Are you aware of how effective SELinux and systemd containers are? Just a simple firewall at the OS level?

Part of a layered and in-depth system but one that introduces complexity.

>Maybe even just using Tailscale (or the open source Headscale) to introduce zero trust access capabilities.

Tailscale in an enterprise production environment? That's not going to pass any sort of security audit, and it probably violates a number of certifications customers require at the enterprise level for network access controls, visibility and auditing.

Just managing the git/jenkins/spinnaker/terraform infrastructure in dozens of locations deploying to and maintaining tens of thousands of servers/pods requires a 24x7 team on top of the hundreds of teams and tens of thousands of devs using it.

If you're small enough that that doesn't make sense, then you might be small enough that one Ops person can handle the load (one is never enough if you're smart, but...), but you are dealing with a very small amount of infrastructure and services at that point.


> Across 10k-100k+ servers

If you "need" that many servers (and aren't Google), you've built your systems massively wrong.


Absolutely.

My issue is really at the other end of that scale: getting C-suites to recognize when owning that core competency is actually beneficial to the company, even if it's not the focus of the company.

I grew up around companies leveraging vertical integration at the right scales to improve costs, seeing companies go the opposite direction trading all those advantages for often never-materializing benefits is... frustrating.


I’d ask, “have we worked together?” since this is a spot on description of my former employer, except it’s probably a spot on description of thousands of mid sized companies.


Same! Some execs get excited about reducing capital expenses for a data center and the teams that manage it. Some CTO gets excited about the flexibility and some legitimate benefits of cloud.

But it ends up costing a shitton of money to switch paradigms completely, and they don't switch paradigms completely for a number of years: If you're just migrating servers to ec2/vpc, you're doing cloud wrong.

Of course, there is the idea of cloud agnostic, or even multi region, which seems a challenge for most places.

At least with terraform, it is theoretically easier to swing configurations over to a different host.


At many places I've worked, there are essentially zero checks-and-balances between "Exec gets randomly excited about X" and "X becomes a mandate, with staffing, budget, and deadline." No technical vetting, feedback loop, sometimes no apparent coordination with other execs (and their random ideas). It's just: "Mike is excited about Cloud. -> We are now doing Cloud." Later, Mike gets excited about something else, and the entire team moves over to something else. "Mike is excited about AI. -> We are now doing AI."


...but the salesperson promised it would be easy, fast and low-cost! </sarcasm>


I would wager it’s not uncommon.

But also, the execs are the ones making the business-risk decisions. Just make sure they have the correct info to make those decisions; then your responsibility is done.


I doubt responsibility is a concern, GP just doesn’t enjoy being a part of the shit show


And my core point is that most companies are shit shows. Employees know what bullet points they should have to minimize downside risk, but struggle with how to get those done while also minimizing upside risk.

In a world of scarcity, just keep communicating the tech debt. Maybe occasionally propose a project to address it.


Some people actually want to spend their time contributing to something meaningful. It also sounds like OP is worried executive incompetence might affect his job security.


Yeah, I inferred that from their post.

My point is that even “something meaningful” comes with tech debt. It’s like that at my current place.

Too many people get “grass is greener” syndrome and think that there is some magical company somewhere which gives everyone plenty of time to refactor everything and fix all of the tech debt and execs make fantastic business risk decisions which always benefit the employee. In a world of scarcity, that practically never happens.

Just weigh your options in the market. If it’s worth staying where you are, just realize that the employee is not responsible for making business risk decisions, only responsible for sufficiently informing those who do of the facts.


You're still assuming that OP, or anyone else, has the same values as you. As I said, some people want to work on something meaningful, or see the writing on the wall and want to increase their job security. It's not about the grass being greener. And sometimes, it is greener, and the only way you find out is by trying something new.


The secret is to tie the tech debt to something that the business wants. If that can’t be done, then you have to wonder how important it really is to address the debt.


I'm not sure it's a secret, but it's certainly one of the most practical ways to address technical debt.

Unfortunately we're at the stage where they outright ignore what they're told, and then blame engineers for not being able to do what they said they couldn't do from the start. They refuse to acknowledge their role in creating the tech debt in the first place through poor planning and wishful but impractical timelines. So proving to them that we need to tackle any part of it is a struggle, short of letting things degrade until a real customer with significant money on the line is upset enough by the state of things to force the issue.

Which ultimately means we're at the horribly dysfunctional stage of management/company growth. The question is whether it continues to get worse, or whether the CEO eventually learns, seriously looks at the effectiveness of the VP levels, and makes changes...


Another great question is "When did you last try to restore from a backup?" which usually is answered with "It's the built-in tooling, why would we assume it's broken?" or similar. Then fast-forward some months/years, and they try to restore from backups only to realize the backups never actually backed up what they cared about.


My dad told me about a customer that had a server that made automatic backups each Sunday night. The backup script would back up all the data and then eject the tape, so the manager could put it in the vault and rotate in the other one from the vault.

When the hard drive failed, they restored the customer to the latest backup, which was the tape still sitting in the tape drive in the server. It was from the first Sunday night after the system was installed, years ago.


I'm confused, it sounds like you're saying the same tape was being ejected every week and then reinserted without any rotation. But in that case, shouldn't the weekly backup process have failed because the tape was full? Was nobody getting those alerts?

Or do you mean the backup process was fine, but they restored from the wrong media, a very old tape that was about to be overwritten, instead of retrieving the one with last-week's copy?


I read it as they were saying the manual part of the process never happened, so the backup from the first week was just sitting ejected forever and they had no alarming to notify them that the new backup failed to write to tape.


No: on the first Sunday night, the backup process completed and ejected the tape. It sat there for years until someone realized they needed to restore from the most recent backup. Since a new tape was never inserted, the only backup was from years ago.


What happened afterwards? Was that manager still at the company, at that time?


This famously happened at GitLab: https://about.gitlab.com/blog/2017/02/01/gitlab-dot-com-data...

> Regular backups seem to also only be taken once per 24 hours, though team-member-1 has not yet been able to figure out where they are stored. According to team-member-2 these don’t appear to be working, producing files only a few bytes in size.


We've avoided that in various shops by making backups/restores part of regular maintenance processes. How do we upgrade the database? By stopping it, backing it up, restoring that to the new server, pointing all code at the new DB, then turning off the old server.

As with code deployment, it's not so scary when it's something you do so frequently that it's just a little script you run.
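
For a Postgres setup, that "little script" can be roughly this (hostnames, database and service names are placeholders; a sketch rather than a drop-in):

    #!/usr/bin/env bash
    set -euo pipefail
    systemctl stop myapp                                # stop writers first
    pg_dump -Fc -h old-db -U app appdb > appdb.dump     # back up the old server
    createdb -h new-db -U app appdb
    pg_restore -h new-db -U app -d appdb --no-owner appdb.dump
    # repoint the app at new-db, start it, verify, then retire old-db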


> it's not so scary when it's something you do so frequently

Yeah, I've found this to be the trick for ongoing hassle-free maintenance too. Make tearing stuff down and up frequent enough and you'll feel confident and safe when you're required to do so to recover from something.

Scariest are applications/services/servers that have been running for years but have never been restarted nor restored. Those scare me.


Cattle, not kittens. My favorite thing about deploying containerized apps? They’re completely fungible and I never have to care about an individual instance. Oh, it hung due to some weird network interaction? Spawn a new one, then come back to see what went wrong with this one before you kill it.


aws-cli will sync your s3 buckets to a local system.
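
Something like this, where the bucket name and target path are placeholders:

    aws s3 sync s3://my-bucket /srv/backup/my-bucket
    # deliberately no --delete, so an accidental deletion
    # in the bucket doesn't propagate to the local copy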

I’m doing that to a Linux box, and then that box is furthermore backed up with Nakivo.

Not my favorite, but the price was okay and I can run the whole Director on Linux, unlike all their competitors. [Veeam’s next major release, 13 or 14, should do this in the next year or so too.]

While Nakivo backs up S3 buckets, NFS shares, and local file servers… to your point, I don’t trust it (or any other backup software whose resulting backup I can’t extract and unpack by hand) as far as I can throw it. So I rsync or mirror to a local Linux box with aws-cli and then back THAT up.

I think you can do all this with Windows stuff too, but I don’t know it that well.

Additionally, you can take servers that are Linux VPSes and do the reverse: mirror THEIR content to an S3 bucket.

You can also run MinIO (open source/free) on your file server and set up S3-to-S3 sync. Cloudflare, for example, will ingest and replicate your MinIO server automatically, and you can firewall it all off to their address ranges. It’s not free, but it actually prices out favorably compared to Veeam and Nakivo if that’s all you need backed up.
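
If you go the MinIO route, the bundled mc client handles the S3-to-S3 part; a rough sketch with made-up aliases, keys and bucket names:

    mc alias set local https://minio.internal:9000 ACCESS_KEY SECRET_KEY
    mc alias set remote https://<account-id>.r2.cloudflarestorage.com ACCESS_KEY SECRET_KEY
    mc mirror --watch local/mybucket remote/mybucket   # continuous one-way mirror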


A fun one like that: a few years back we had some code using DynamoDB that relied on the automatic point-in-time backups. I asked if it had been tested; need you guess the reply?

Of course it turns out that the restore can only happen to a _new database name_ not the original, and the code had in multiple places hardcoded the assumption of what the db was called.

So restoring also involved patching the code and rolling that out; you can't "roll back" because to roll back the db the code must roll forward.
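
For reference, the restore-to-a-new-table limitation looks like this in the AWS CLI (table names are made up):

    aws dynamodb restore-table-to-point-in-time \
        --source-table-name orders \
        --target-table-name orders-restored \
        --use-latest-restorable-time
    # the restored table gets a new name; the code must be able to follow it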


Ugh, very annoying limitation. I've written scripts to create dummy tables, restore backups and sync across to existing tables twice now.


'Roll-middle-out'


Agreed. If you haven't tested your backups recently (daily and automated is best), you don't have backups. Several of my clients (CTO coaching) had problems in the past because they restored backups and found they were not complete (for various reasons).
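
For a modest Postgres database, "daily and automated" can be a nightly cron job along these lines (paths, database and table names are placeholders):

    #!/usr/bin/env bash
    set -euo pipefail
    latest=$(ls -1t /srv/backups/*.dump | head -n 1)   # newest dump
    dropdb --if-exists restore_test
    createdb restore_test
    pg_restore -d restore_test "$latest"
    # fail loudly if the data we actually care about isn't in there
    psql -d restore_test -tAc "SELECT count(*) FROM users;" | grep -qv '^0$'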


Daily test restores are infeasible for anything but toy projects. You should periodically test your restore procedures, but it's incredibly costly and time consuming for sizeable platforms. It's just not that easy to restore a 10+TB backup, for example, and that's a _tiny_ backup size for a B2C product.

They can easily go into the hundreds of TB, depending on your platform.

And I might add: I vividly remember GitLab's article about how they had automated backups and test restores for years, but when they actually needed them... it turned out some data wasn't part of them after all. Just because you're testing your restore procedure doesn't mean you've actually accomplished anything.


Have two backups, the most recent data and everything else. Archive data to different databases, e.g. only have the most recent 6 months in a production OLTP database.
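
One way to do that archiving in Postgres, as a single atomic move (table names are made up, and archive_events is assumed to already exist):

    psql -d prod <<'SQL'
    WITH moved AS (
        DELETE FROM events
        WHERE created_at < now() - interval '6 months'
        RETURNING *
    )
    INSERT INTO archive_events SELECT * FROM moved;
    SQL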

"daily test restore is infeasible for anything but toy projects. "

It probably depends on what you call a "toy" project. If you work for Google then yes, I think everything is a toy project, and you're right. I've only worked for ~$200M ARR / 1M DAU businesses, and restoring was no problem. From your vantage point, working for a FAANG-scale business, it's a toy project; I can see that. But there are many more "toy projects" of this kind than there are FAANG companies.

"10+TB backup for example, and thats a _tiny_ backup size for a b2c product."

Sure.


> but it's incredibly costly and time consuming for sizeable platforms.

If your restores are too time consuming to test regularly, they sure as shit aren't going to be useful in a disaster.


Why is that? Hours or even days of downtime are still better than just losing all your data. It's a simple cost-benefit analysis, and it's OK to pick different trade-offs depending on your use case.


I have spent most of my career in newspaper publishing and banking.

A newspaper that doesn't publish for a few days might recover. A bank that drops off the Swift network for days isn't a bank any more.


Russia and Iran would like to have a few words with you.

Banks regularly close for multiple days for bank holidays. Unscheduled downtime is a somewhat different story, though.

Luckily, the traditional SWIFT/banking infrastructure is so negligible these days that my phone could host a classic banking infrastructure for an entire small country.


This is absolutely peak "confidently incorrect"; it's hilarious, but completely expected on this site.


This is a nonsense response.

A bank exists not as an isolated entity, but as a node in a local, regional and global network of transactions.

Your phone as a “classic banking infrastructure” (a nonsense phrase) can’t do credit card acquiring or realtime transactions, because it’s not connected to the payment rails, transaction switches and so on (like SWIFT, but also all of the other, less global ones run by central banks and private entities).

In developed societies, instant settlement for bank-bank transfers is the norm, and cash flow is dependent on that.

Russia and Iran pay about 2-5% for above-board (non-sanctioned) cross-border transactions due to the extra costs of being outside SWIFT and under USD sanctions, and between 20-50% where physical middle-people are needed to move pallets of USD.


It depends on the accepted downtime.


In addition to making sure it works, you should make sure you know how to handle restoring. Sure, running a command is easy, but what about spinning up new infra? What if the backup is corrupted? What if the one person who knows the setup is gone, or asleep? This is mainly a problem for smaller teams that don’t have the redundancy or resources; they really need to make sure there are at least docs on how stuff was set up. Which reminds me, I need to do my yearly checkup too.


You have to test restoration as part of SOC 2, so most companies with real customers do it at least once a year.


One thing I've never figured out is, what is the difference between backups and replication? And, does restoring from backups always mean losing more _recent_ data than replication?


For hardware failure, replication is the bee's knees and indeed means you'll lose less data (or none, depending on your replication settings).

But, backups will help if you replicated _bad data_, or more accurately _data changes_.

You can restore from backup if you accidentally ran `DELETE FROM foo;`, where replication will not help!

(Insert cryptolocker type viruses, bugs, human query mistakes, etc).


I imagine in that scenario the engineering team can develop inter-dimensional travel, then travel to a universe in which that command was never executed. They bring the data back and restore the database.


I managed to delete all records in a table a week ago (I blame Copilot). Used time travel (not quite inter-dimensional travel) in BigQuery to restore:

    INSERT INTO ... SELECT * FROM ... FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)


Replication isn't a backup, because if you accidentally delete a file, and that deletion is immediately replicated, then you can't get that file back.

Backups are a specific point in time.


Backup is a useless word; it's too overloaded.

Are snapshots backups? Snapshots on RAID? Snapshots on replicated disks?


A backup is something that will functionally replace the original should the original fail, regardless of how the original failed. For data, this means that the restore process is part of the backup.

Snapshots are not backups. Snapshots on RAID are not backups. Snapshots on replicated disks are probably backups, so long as the disks being replicated to are not inside the same case/building/city/continent (pick your risk suitably) and you're not able to delete the snapshots from the machine hosting the originals.

The second SIM in my phone provides a backup for my primary service provider, so long as I keep it activated. The torch in my pocket is a backup for the lighting in my house, so long as I keep it charged. My data in tarsnap is a backup, so long as I'm able to restore it. Which means data in tarsnap isn't a complete backup on its own: unless I'm able to recover the encryption key, I don't actually have a backup.


Indeed. So if it's on the same continent/building/room, is it a backup? It depends on why you need to restore it. You can't tell whether something is a backup until it's restored; it's Schrödinger's backup.

A snapshot is a backup if a user deletes/edits their file and wants the old version. RAID is a backup if you're recovering from one disk failing.


"Backup" describes what they are for. There are many ways to do backups. The main thing is that backups are archives stored away from danger. There are different kinds of danger and different needs for protection.

Snapshots can be backups depending on where they are stored, though usually not if stored locally. For example, an RDS snapshot is a backup against the database going down, but not against the account being deleted or the region being destroyed. Generally, snapshots are a way to make backups to a more durable medium.


One problem with replication is if the disaster is that all the data has been deleted, that deleted state will get propagated to the replica, so you will still have no data.

But yes, if the problem is simply that the main setup is down, replication will often give you a more (or even completely) up-to-date copy than a daily backup will.


Depends on the thing you're replicating and the technology you use. If you're replicating a database you get a bunch of 'log' files containing all the changes in chronological order. While you could throw those away after filling a single replica database you also can keep them and use them to recover a database snapshot from a while ago. You're not going to get data that recent with only full backups.


A simple way to remember it, I think from DevOps Borat: "Redundancy/Replication fix hardware problems. Backups fix stupid human problems."

> And, does restoring from backups always mean losing more _recent_ data than replication?

This depends on the archiving technology and what you're archiving.

Our file and object stores take one full backup every day. This means, we could lose up to 24 hours of data changes on these stores if something happens within these 24 hours. If this is acceptable or not depends on the RPO - the recovery point objective, or the "maximum acceptable data loss". However, especially for documents, 24 hours can be acceptable, because users and customers do tend to have files they uploaded to the system around for a few days. Especially if you have a chance to identify the lost documents.

Both on MySQL with the InnoDB driver, as well as on postgres, you can use PITR backup solutions - point in time recovery. With this, pgbackrest or e.g. xtrabackup store a full backup of the database usually once a day at our place, and then keep archiving the WAL / transaction logs of the system. And we, in turn, archive snapshots of these into the longterm archiving once a day.

If we need a restore, we'd first restore a pgbackrest or xtrabackup state from the long term archiving onto a system. And then we can use the PITR recovery mechanisms to restore at a specific point in time.

Technically, we could precisely recover down to the last transaction before the disastrous transaction to minimize data loss. In fact, I've done so one or two times after some database migrations went haywire. That involved scrolling through transaction logs with a viewer to identify when the migration tool started running, noting down the transaction ID of the transaction where the tool started its check, and then restoring to the transaction before. Very cool, tbh.
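
For the curious, a pgbackrest point-in-time restore looks roughly like this (stanza name and timestamp are placeholders):

    pgbackrest --stanza=main --delta --type=time \
        "--target=2024-11-30 23:59:00+00" \
        --target-action=promote restore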

This is important for an RDBMS, because the data in the relational database tends to be much more volatile than the data in a file or object store. With a file store, users upload a file and then move it to their recycling bin or their "done" folder on the local system, and they can easily drag it back out tomorrow. With the database, the user spent 30 minutes to an hour writing up some text or a comment and expects it to be safe and sound once they hit "Reply". Losing this kind of data creates a lot more work and effort for our customers, because then they have to figure out what state the data is in and what to redo. It may also cause their business processes to run haywire and... it's not great.


Replication is a snapshot of everything, taken at the time of file access.

Backup is a replacement for specified files, as required by a system recovery procedure. It may be a total image, or a collection of config and data files that make up the daily boot-up settings.


"Amateurs backup. Professionals restore."


https://cloud.google.com/blog/products/infrastructure/detail...

Google Cloud accidentally wiped an Australian super[annuation] (pension) fund's entire cloud deployment earlier this year. I think that if you really want durable backups, they have to be reducible to object storage and put in someone else's cloud.


Thank you for sharing that blog post. It specifically mentions that no data was lost. I am confused by your comment about durable backups. Deeper question: do people think on-prem backups are more reliable than cloud? I would say, for 95% of orgs: no.


No data was lost because APRA rules require funds to back up across multiple clouds.


... not quite. I worked directly with the folks involved on getting more RCA details public. This customer used a single product on GCP, a specific type of VMware hosting, and the "subscription" to that product failed, which turned those resources off. It's more like turning off all their VM's, rather than deleting their entire account, identities, access structures, etc.


The reporting on that was a bit muddy, with Google and UniSuper officially saying different things in different places. Regardless, calling it "more like turning off all their VM's" sounds like heavily downplaying the reality. The downtime alone confirms it was way more than that.

From their joint statement [0]:

> when the deletion of UniSuper’s Private Cloud subscription occurred, it caused deletion across both of these geographies.

> an extensive recovery of our Private Cloud which includes hundreds of virtual machines, databases and applications.

> UniSuper had backups in place with an additional service provider. These backups have minimised data loss

Strangely enough on this last point a Google blog post [1] says:

> This incident did not impact: The customer’s data backups stored in Google Cloud Storage (GCS) in the same region.

[0] https://www.unisuper.com.au/about-us/media-centre/2024/a-joi...

[1] https://cloud.google.com/blog/products/infrastructure/detail...


I agree about data backups but replicating your setup in another cloud provider is:

1) Expensive

2) Not straightforward, e.g. is there a 1:1 setup in another cloud for your system?

3) Likely to go untested and be useless when you need it most


Fully agree, that's why you need to think well first and come up with a compromise that you are willing to accept. Periodic testing of your DR procedures is non-negotiable but fortunately it's usually much simpler for smaller startups than for larger orgs.


The best part of cloud providers is that short-term VMs are relatively cheap to deploy. You don't need a full active-active failover setup, you just need to design your infrastructure in a cloud-agnostic way and test the deployment scripts a few times a year.

The most expensive part is going to be maintaining an up-to-date offsite data backup. Running a few VMs for a handful of hours is basically free.
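
A periodic drill can then be as small as this (the directory layout and the deploy/smoke-test scripts are my assumptions, not a known setup):

    # stand up a throwaway environment at the secondary provider
    terraform -chdir=dr init
    terraform -chdir=dr apply -auto-approve
    ./deploy.sh dr && ./smoke-test.sh dr        # hypothetical scripts
    terraform -chdir=dr destroy -auto-approve   # a few VM-hours, basically free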


> you just need to design your infrastructure in a cloud-agnostic way

But that's one helluva "just", and also means that you can't use the platform-specific features that make life easier. In practice that's probably way more expensive than spinning some testing VPSes up and down.


On the other hand, can you afford not to? Those platform-specific features might look tempting at first, but in reality you are often mostly acquiring a bunch of very expensive technical debt.

If Amazon decides to throw the banhammer your way, how long will it take you to retool your stack onto another cloud platform? Will your company survive if all your services are offline for a few weeks?

And if you grow beyond the startup size, can you afford being locked to proprietary technology? What are you going to do if Amazon decides to increase your prices by 100%? How are you supposed to negotiate when Amazon knows you are unable to switch to another cloud provider?


I do think (1.) depends on your company size, and business model. For most, it's cheap, e.g.

https://rsync.net/pricing.html

That said, I was once CTO of a company with 10 photo studios, and we had a large volume of new (raw, DSLR) photos per minute, so cost was an issue, as was upload speed for offsite backups.


My CEO has been letting the AWS bill go unpaid, apparently not understanding that our entire business and all of our IP will simply vanish if our S3 bucket gets deleted. Zero backups of any kind

I manually pulled a backup of everything but jeez, not good.


Hopefully your warnings to them are in writing, and you have enough to CYA just in case.


This is why the primary bank regulator in Australia (APRA) has insisted that banks meet their CPS 230 obligations by being multi-cloud. There's a lot of pushback on it (especially from AWS), but it's a significant risk if you're leasing all your infra.


When should one start doing this, though, in a company's life cycle?

What is the most reasonable point that meets the criteria of 'as soon as possible'?

Because I imagine out of the gate doing this could be a net negative, not a net positive.

On the other hand, I'm not sufficiently well versed in the absolute latest devops techniques that might make this whole thing trivial, but I thought all the major cloud providers had just enough quirks in their Terraform support that you can't write once, stand up/deploy anywhere.


You should have tested backups by the time you have something running in production.

It's very easy to do if you don't do the absolute latest devops techniques.


Really thinking about testing the whole "we could migrate to any cloud at a moment's notice" idea.


There should just be a legal duty placed on cloud providers to not do this. Nobody would expect you to hold a second redundant commercial lease for your offices or retail location.


This isn't a great example because buildings do have accidents like fires and floods. If you need business continuity you do plan on having multiple working locations.

Of course an accident is different than just randomly terminating service.


I think this is a tough problem, partly due to the post-paid nature of most cloud services, partly due to the impact to other customers.

If you had a bunch of retailers in a shared space (like a market), and one of them was setting off fireworks, using all the power/water in the space, and scaring away customers, I'd expect them to get kicked out pretty quickly.

Now it may be that this is a false positive, I'm sure they happen, but in the case where it's a legitimate bad actor that is actively harming both the company and other customers on those servers, what's the course of action the company should take?


Shouldn't this be covered under standard tort law?


I run daily backups of our entire GSuite domain to a local RAID 5 device. Everyone thinks I’m crazy.


Of course, never put all your eggs in the same basket. Have a different registrar as well, and maybe a different CDN ready to go at a moment's notice.


Hi there, as some untrue news is making the rounds about this case: there was a notice of termination via email, with a deadline in accordance with our T&C, on 30 October 2024. Our team has already been in contact with this customer several times and we also have the transmission protocol of the communication. You can all rest assured that we do not close accounts randomly. There is always a specific and legitimate reason for doing so, such as abuse of our services, not following our terms and conditions, etc. So please make sure you comply with our T&C: https://www.hetzner.com/legal/terms-and-conditions/. --Katie, Hetzner Online


Could we get some more information on the matter?

Over the past years there have been numerous people online who have claimed that Hetzner closes accounts without giving a reason. I'm sure most of those claims intentionally omit some details to make it look like they didn't infringe the T&C.

However, as a Hetzner customer (a small one, to be fair), I'd still like to know whether those complaints are baseless and whether I can still trust your company.


This is the relevant part from their T&C:

2.7. Furthermore, we reserve the right to terminate the contractual relationship without notice for good cause.


I think Section 8 "Use of the services / content" is likely more relevant in this case.

Given that one of their goals on their homepage is "access to internet ... [due to] ... outright censorship" (which is legally required in some countries), they are likely in violation of

"8.1. The Customer is obligated to check and comply with the legal provisions arising from the use of the contractually agreed services"

and

"8.4. If we become aware of illegal activities, we are obligated under Art. 6 Abs. 1 DSA (Digital Services Act) to request that the Customer immediately removes the offending content and we are entitled to lock the Customer’s access to their Hetzner services or account."


The full part reads

Furthermore, we reserve the right to terminate the contractual relationship without notice for good cause. Such good cause is deemed to exist, among other reasons, if the Customer fails to meet its payment obligations or violates other important customer obligations. A further important reason which may result in us locking or terminating the Customer’s services or account without notice is if the Customer uses content that impairs the regular operating behavior or the security of our infrastructure or our product, or violates paragraphs 8.1. - 8.3. of these Terms and Conditions.


Hey Katie, happy to hear from you. I'm glad you can finally document the communication (and not just when people start making noise on the internets). You should have our address, but just in case it's been, uh, misplaced, please forward your email dated 30 October to contact @ kiwix.org

This below is where we got started; the ref number should make it easy for you to sort:

> Procedure: L0020649F
> Person: [redacted] / Kiwix
> Cause: Hello,
>
> Starting this morning (December 1st at 00:00 UTC), our servers went down. We received zero email nor notification of any kind from you. Looking for a way to contact you, I looked into this Unlock tab that lists an incident that matches the time the problem started.
>
> It's been close to (12) hours already, without a single message from you. Our services are down.
>
> In the Robot dashboard, there is no server listed. In the Traffic statistics page, it says we have no IP. In the Cloud dashboard, we can't even enter; it says Access Denied.
>
> What's going on? The billing page is reachable and it indicates we paid all our invoices and the next one is to come in 5 days. So it's not a payment issue.
>
> I checked https://docs.hetzner.com/robot/dedicated-server/troubleshoot...
>
> I am not sure if we're locked, because the traceroute does not lead to blocked.hetzner.com. Because the server is not listed, we can't use the whitelist or any other tool.
>
> Please restore the service immediately. Please let us know what kind of issue there is, if there is one.
>
> Only restoring SX65 #2453510 (135.181.224.247) is urgent. The two cloud ones can be sorted out later.

We got two more emails from Hetzner the day after that (Monday 2) but none addressing the root issue. Our account access had been locked by then anyway so we had to call up Germany; you should be able to document that as well.

Not sure HN is the best place to compare notes but hey, happy to meet you where you feel comfortable responding.


If you agree to let them publish the email, they could post it here? As they seem keen to engage with the public over the matter.


I have already been in contact with the customer here: https://www.reddit.com/r/hetzner/comments/1ha5qgk/comment/m1... --Katie


That doesn't show any of the news being untrue as you initially claimed here.


Here https://www.reddit.com/r/hetzner/comments/1ha5qgk/comment/m1... Kiwix appears to concede that Hetzner did originally send an email (by virtue of not kicking up a fuss any further). Though it's unclear if Hetzner forwarded the original with headers, or simply sent the original message in a new email. The latter is, of course, proof of nothing.


Kicking off customers due to vague T&C violations is one thing, but deleting their data without giving them a chance to get it out is something else entirely, especially if you only notified the customer via email, without confirmation of receipt or attempts to use alternative communication methods. Is permanently deleting data as soon as services are shut off standard procedure for your company?


> deleting their data without giving them a chance to get it out

What are you talking about? The comment you're replying to says:

> There was a notice of termination via email with a deadline in accordance with our T&C, on 30 October 2024.

> Our team has already been in contact with this customer several times and we also have the transmission protocol of the communication.


Backups are irrelevant here (yes, backups are important). If Hetzner really deleted production data without warning or providing a grace period for their customer to migrate their data, then they are simply not a stable foundation to build on.

I have never been a customer of Google Cloud for this reason, and I sure as hell won't deploy new servers on Hetzner until they provide a clear statement on what went wrong and what they will do to make sure they never screw up like this again.

Hetzner, the ball is in your court.


> they are simply not a stable foundation to build on.

They're a budget host; you should always proceed with caution and never rely on them for production. It's the same as buying a second-hand eBay server to host users on. I learnt that the hard way.

> Hetzner, the ball is in your court.

Not really. If you read the T&C, you'll find that they can do anything with the server.

From their T&C:

2.7. Furthermore, we reserve the right to terminate the contractual relationship without notice for good cause.

--

Any hosting company can do the same. There have been numerous stories of Amazon doing just that. Same with Google.

Unless it's colocation or where you own the hardware you can be screwed in many ways.

I would never trust a dedicated server host.


Not to dump on Hetzner here, but we have no idea if these T&Cs are even enforceable under local law in Germany/Finland/Arstotzka/etc. Many less reputable companies put all kinds of unenforceable bullshit into their T&Cs.


Which is one reason why companies generally don't tell you what exact terms you broke. Otherwise you might be able to build a legal claim against them.


> Not really. If you read the T&C, you'll find that they can do anything with the server.

Yes, but that doesn’t mean that they have to.


> for good cause

Did you miss this part?


I didn't post the full thing because it's still vague, and the quote ended at the full stop.

"Such good cause is deemed to exist, among other reasons, if the Customer fails to meet its payment obligations or violates other important customer obligations."

https://www.hetzner.com/legal/terms-and-conditions/


Right, so they can't just pull the rug from under you for any reason they like, then. "Other important customer obligations" is the important and vague part, here.


They're all this way.


> if hetzner really deleted production data without warning

Yeah, if. Why are you so certain that Hetzner "screwed up"?


Most of the time when you hear people complaining about Hetzner shutting down someone's servers, it's because they were hosting content going against the ToS or similar.

But this seems to be about Kiwix (which in short is "offline Wikipedia" in various ways) and doesn't seem to be about questionable content in any way.

Eventually I guess we'll get Hetzner's perspective on this, as they tend to start writing publicly about issues once the other side starts writing publicly about it as well.

Personally, I've been a happy user of Hetzner for many years, with no issues that weren't my own doing. But reading about people having their servers deleted in the middle of the night on a Sunday (Berlin time) and all data wiped immediately, with no recourse, does sound a bit aggressive. Luckily, it seems like both Kiwix and I have mirrors for the data we care about.


"hosting content going against their ToC or similar"

Or hosting content that Hetzner misclassified as against their ToS. Or content they flagged because of a string in a random file name. Or, in one Mastodon instance's case recently, because Hetzner saw that users could upload their own images and decided that was risky (never mind that this is common, and that the instance had moderation and a strategy for if someone tried to host anything illegal; the one employee reviewing it was just twitchy that day, and there is no recourse), etc.


Employee? I'm sure it's an "AI" script to reduce costs.


To the end user getting screwed, it doesn't matter whether your usage gets misclassified by an AI bot or by a clueless human bot in an Asian bodyshop. Your account is still banned by that corporation either way; it doesn't matter to you why, or who at the provider did it.


With these kinds of things on the rise, I'm sure "not driven by AI" is going to be a unique selling point, soon enough. Right? Or is this just wishful thinking?


Hi everyone, Our teams who review such cases do so on a case-by-case basis. We do not use AI or automated systems for these situations, but review them manually. As a general rule of thumb, we try to avoid commenting about these cases publicly. We do that so we can protect the affected customers' personal data. But this doesn't mean that we're not in contact with the customer, which we are in this case. --Katie


Good point. I wonder if you can even get a real person, instead of an idiot stochastic parrot, to review it anymore?


    Or, in one Mastodon instance's case recently,
    because Hetzner saw that users could upload their own images
Wait, what? Yikes. I'm planning a project like that. Do you have a link to more information?


https://woem.men/notes/9r86xd69cu89052m

Also shoutout to Cloudflare for showing off what a diverse company they are in this one /s


https://woem.men/notes/9r5bwnci8x2204it

“Actually, they’re 1000 years old”



Why is that image worse than this one?

https://files.catbox.moe/bt4j9j.png


Careful, sharing that link is illegal in some jurisdictions


The alt description doesn't do it any favours either.


I don't understand - Cloudflare forwarded the report on as usual, what would you want them to do instead?


That wasn't even the one I was thinking of; sounds like this has been happening a lot.


Damn, thank you thank you thank you. That is ultra messed up.


> Besides Wikipedia, content from the Wikimedia Foundation such as Wikisource, Wikiquote, Wikivoyage, Wikibooks, and Wikiversity are also available for offline viewing in various different languages. [0]

> Users first download Kiwix (or a browser extension), then download content for offline viewing with Kiwix. [1]

> Our main storage backend became entirely unreachable. For the average user that meant not being able to access the library and download files, and for us that meant not being able to connect to it and see what was wrong. [2]

Maybe some odd photos landed on WikiMedia which then got automatically synced to Hetzner's servers and then triggered some alarms.

I can't judge about Hetzner deleting the data, but them not attempting to really get in touch with the Kiwix team -- after all they should know that they are trying to do some good in this world -- is a really horrible move. In the same category as Google blocking access to user's accounts without any word, or German companies suing security researchers for notifying them about a security flaw in their systems.

Shame on Hetzner.

[0] https://en.wikipedia.org/wiki/Kiwix#Available_content

[1] https://en.wikipedia.org/wiki/Kiwix#Description

[2] https://mastodon.social/@kiwix/113622081750449356


Keep in mind the source for "they did not reach out" is a random guy on the internet who has to find an excuse for why his service was down for 3 days.

"The hoster deleted my stuff without warning" is up there with "the dog ate my homework".


My experience is the opposite. They are completely deaf when it comes to reports about ToS violations. You need a lawyer to get them to take anything illegal down.


That's an interesting perspective for sure, thanks for sharing that.

On one hand you have these comments in this submission, saying Hetzner is too trigger-happy and takes down things too quickly. On the other hand, you have people like you using the process from the other side who feel like nothing is being done and it takes forever to get through them when needed.

I feel like it's very hard to have a balanced perspective unless you have experience of both sides of the process, which unfortunately I'm guessing most people are missing. I certainly am, as I've never tried to get someone else's servers taken down on Hetzner, so I have no idea how that process works, I've only ever been on the receiving side.


These perspectives are not in opposition. It sounds very much like they have AI or unqualified humans making final decisions on abuse reports/scans, and then refusing to reevaluate when the customer complains. So, just like most tech companies.


>But reading about people having their servers deleted in the middle of the night on a Sunday (Berlin time) and all data wiped immediately, with no recurse, does sound a bit aggressive.

There are several comments under this thread from people reporting essentially that happening to them.


Maybe read again: the other posts mention accounts being denied before anything was created, or customers being informed up front and given lead time to move.

OP claims everything got immediately wiped without warning. That would be against Hetzner's own ToS.

OP also doesn't elaborate further, and is posting this in a position where he has to explain his own downtime of multiple days. Make your own judgement about what is realistic here.


Why did you create an account just to defend Hetzner?


Hang on, Hetzner literally deleted all their data without warning?

That’s actually insane and business killing. Both for Hetzner’s reputation and potentially for their customer.


This happens literally all the time with Hetzner; I can't tell you how many times I've heard some variation of this story (or seen it here on HN). But they're cheap, and most people aren't going to find the people complaining online about it, even if they do actually try to find out more about the company, so I'm afraid it hasn't hurt them much.


They are great for throwaway hobbyist side projects where you don't want to worry about AWS billing horror stories or more expensive offerings like Digital Ocean or Linode.

I would not recommend them for a serious, money-on-the-table business.


I only use them for money-making projects. Based on my own experience and what I've read online, you need to be careful with:

* crypto mining (I used it back when it wasn't causing much trouble, but I noticed my nodes were constantly attacked at a rate I never saw on other servers); IIRC Hetzner's current ToS forbids crypto mining

* things in legally grey area which might be legal in some places but not so in others, especially in the EU

* protecting your servers well; if you become the victim of an attack and your servers start attacking others, Hetzner will isolate them and notify you so that you can solve the problem

Other than that, the only problems I've had in the last 15 or so years were failing bare-metal components, which they promptly replaced. That's all.


Their ToS forbids not just the crypto mining (that was extremely reasonable to ban ten years ago, but it's moot today) but also some arbitrary financial technologies they don't like.

So beware of their ToS.


> but it's moot today

I disagree. It's not just the nuisance of wasted clock cycles. It also makes the network a juicy target for hackers. To anyone about to reply "you don't think people hack them now?", how do you think the correlation of attack sophistication and frequency looks for a network with/without a bunch of FREE MONEY inside? :)


It's moot because it makes no sense to mine on CPU or even on an entry level GPU that Hetzner did provide at one point. You will make a couple of $/m.

Besides, no mainstream crypto is mined anymore except Bitcoin.

So moot.


Is it really moot today given the current geopolitical landscape? I would assume not given they're based in Germany.


It's moot because it makes no sense to mine on CPU or even on an entry level GPU that Hetzner did provide at one point. You will make a couple of $/m.

Besides, no mainstream crypto is mined anymore except Bitcoin.


What's in place instead of mining nowadays? Proof of stake or something like that?


Proof of stake, correct.


    > also some arbitrary financial technologies they don't like
Such as?


Such as storing, uploading, downloading and serving transactions in one specific way, called a blockchain or distributed ledger. They have also explicitly forbidden storing blockchain data on their servers, and anything auxiliary related to it.

It is obviously hostile language created by lawyers who did not spend much time researching the subject.

Of course it's unenforceable in practice, which is why a hefty chunk of Ethereum nodes have been hosted on Hetzner for years and years with no problems.


I've had the same experience with them and OVH. I've yet to try the other players in the market like Scaleway.


I would absolutely use Hetzner for a real money-on-the-table business. You just have to know what you are up to and do your cost-benefit analysis.

I actually moved a business of ~100 FTEs from AWS to Hetzner once. Aside from the migration cost, the price was roughly 25% of AWS.

At the end, the biggest gain was not monetary, but human. For years, that business could retain skilled engineers who had the opportunity to work close to bare metal, caring about the nitty-gritty technical details of backups, failover and high availability.

And they did not even cost much. That they had so much leeway in designing the system instead of "relying on the cloud" was a major retainer.

I left many years ago, the business switched frameworks since then but they stayed on Hetzner.

P.S. Yes, that was before Hetzner Cloud became a thing )


>but they're cheap

Maybe they're cheap for a reason.


Yes, and you have to put that in your cost-benefit analysis.


Indeed; no one ever seems to consider that before defaulting to them though :(


I didn't default to them, but did start a new project on their infrastructure.

They deleted all of my data a month in due to not believing my name was real, without even bothering to contact me to verify anything. They deleted my backups as well because I was dumb enough to keep them under the same account.

I learned a valuable lesson the hard way and have improved my methods as a result, but sad that it cost me an entire month's work due to carelessness and recklessness on their part.

Sure, it's "cheap for a reason", but let's not pretend like this type of expectation is advertised, especially as many on HN tout them as a drop-in replacement for competitors.


I had actually forgotten about this. A friend of mine had the exact same thing happen (dropped because "you have to use real names" or whatever, but they did use their real name, and it wasn't even anything suspicious or weird [not that that should matter], they just have a name that sounds vaguely Eastern European :S)


Guess Mr. Phuc dat Bich from Hanoi needn't bother applying.

Wonder what algorithm they use to recognize a "real name".


Maybe if it does not sound funny in German, it's a real name?


Brits choose German names when they want to be funny.. Like Klaus Hergersheimer.


That feels like a preposterous automated policy. How would you design rules for what is a real name? At least, raise it for human review and some kind of manual validation before nuking an account.


[flagged]


Ok, but please post informative comments rather than putting others down.

https://news.ycombinator.com/newsguidelines.html


> This happens literally all the time with Hetzner

[citation needed]. Even when they shut down Russian customers they gave advance warning. This is the first time I have heard of service being shut off (and data deleted) without any warning.


A lot of people suck up to them. They seem to require formal ID proof to authorize your account outside the EU, and they can deny you without any reason whatsoever. If you look up similar things happening to others online, the standard response you'll see is "If you don't like them, take your business elsewhere". I would rather pay more with DigitalOcean than be treated like an untouchable by Hetzner.


> ID proof to authorize your account outside EU

They do that even within EU, and even when your credit card passes 3D secure validation.


And not just ID, but additional documents proving your address (like a utility bill, but only in some specific formats). I tried to open an account there a couple of times, but they discriminated against me based on the documents I had.


> Hang on, Hetzner literally deleted all their data without warning?

That’s what the post said. But of course we have no idea if it’s true or not. No evidence was provided, and we are only hearing one side of the story.


No evidence was provided also because they did not send any email or offer any kind of statement, PDF, anything, to explain what they did. It was a purely silent delete.

And in that Hacker News thread we have dozens of people relating similar stories.


But the thread says, “When reached, they could not explain the reason for the cancellation: Them: - We sent you an email. Us : -We did not receive it, can you please resend? Them: - We can't”

Was that on a phone call? Because if not, surely there is some record?


I'm going to review my backup strategy these holidays, and look at how much downtime my services would incur if Hetzner shuts me down.

The reality that they have this power, and that they'd delete data irretrievably, scares me.

Last year I had a misconfigured port on a Docker service, and someone was able to exploit it and run a port scanner. It was during a period when I was away from home, so if I hadn't seen their service abuse emails in time, I could have returned home after a few days to find all my data wiped out (or uptime monitors complaining).
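
For anyone with a similar setup, one minimal mitigation (a sketch, assuming the service only needs to be reached locally or through a reverse proxy) is to publish container ports on loopback only, since Docker's published ports can bypass typical host firewall rules like ufw:

  # hypothetical example: bind the published port to 127.0.0.1 so it is
  # not reachable from the internet (nginx is just a placeholder image)
  docker run -d -p 127.0.0.1:8080:80 nginx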


Honestly,

as much as we like to hammer on the EU's (lack of) companies, one potential improvement point is customer service.

German companies are awful at customer service. Even within the EU


>German companies are awful at customer service. Even within the EU

one would have to reconsider a century of stereotypes if they weren't.


>German companies are awful at customer service.

True also from my experience. I've noted several potential reasons why that is from my time in Germany.

Government-provided consumer protection laws are quite lax, and disputes are tricky to win and don't represent a big enough deterrent for the scammers when they're just a slap on the wrist and therefore part of the cost of doing business. Sure, you can get sued and you lose once, but if 80 of the 100 customers you scammed don't sue you or don't win, then you're still at a net positive, and therefore it's profitable to keep doing it.

Also, Germany doesn't have common law, so lawsuits aren't arbitrated based on precedent; customers who got screwed need to sue and win individually for the same issue. That favors the companies doing the screwing, since without the precedent of common law there's no slam-dunk win every time, which minimizes their risk of losing. Also, some German judges are just tech-illiterate boomers who will throw out a case they don't even understand, unless you're Axel Springer.

(Some) rental agreements, internet, telco and gym memberships are my favorite infamous examples. They're almost universally regarded as anti-consumer, with tonnes of sketchy clauses, but German lawmakers do nothing to improve that for the consumer.

Secondly, Germans aren't used to being very demanding and lighting a brand on fire on social media the way Americans/Anglophones do on Twitter when they don't like something. That's partly for cultural reasons, where making a fuss in public is discouraged/shamed, and partly for legal reasons, where a company can sue you or at least send you scary legal letters for libel if you damage their brand online like that in Germany. Or, at the least, the company can simply demand the social media platform take down the offending posts, and by German law the platform has to comply, which the likes of Google/Meta will do automatically without any arbitration.

Also, culturally, conservative Germans seem to have gaslit themselves into believing everything "Made in Germany" is perfect and without fault, while everything made abroad is of poor quality or at least worthy of scrutiny, so they just default to using German products without looking across the fence to check out the foreign competition. This way of thinking is more typical of manufactured goods, but I'm not sure how much it applies to SW products and services.

Couple these with the difficulty of starting and scaling a business in Germany as a small entrepreneur, and with the legal and bureaucratic hoops designed to keep foreign competitors out, and German companies operating in Germany who became established players have little incentive to improve beyond the bare minimum, so they can keep providing poor-quality services while still staying in business. It's the classic picture of an economy of well-connected dinosaurs sitting on old money.


> [...] so customers who got screwed need to sue and win individually for the same issue. That favors the companies doing the screwing, since without the precedent of common law [...]

This is factually false.

> (some) Rental agreements, internet, telco and gym memberships are my favorite infamous examples. They're almost universally regarded as anti-consumer, with tonnes of sketchy clauses, but German lawmakers do nothing to improve that for the consumer.

Any examples here? The fact that contracts like these, if you forgot to cancel them, can only renew for one month is better than anything I've seen anywhere else. Also that you must be able to cancel anything online with the click of a button if the contract was made online. Add that to the fact that any clause is worthless if it includes something a reasonable person wouldn't expect. I don't know many countries that actually enforce this - Germany does all the time.


>Add that to the fact that any clause is worthless if it includes something a reasonable person wouldn't expect.

The problem is you always need to sue to get justice for that, which means paying for lawyers and spending time and money, plus the stress.


That's true in probably every jurisdiction, though? At least in Germany you can often get free legal advice for many things (Verbraucherschutz, Mietrechtsberatung etc.) and there's insurance you can buy that covers your legal fees in case you lose. And legal fees in Germany are typically not exorbitant.

(Also in some cases, it's the other way around. If your landlord wants to increase the rent it's on them to sue you if they have a valid case.)


Legal advice and reality in Germany are 2 different things. The truth is that dealing with any kind of legal situation in Germany is a huge headache, and all you get in the end is to prove you are right and receive what should have been yours anyway, without any additional compensation for your trouble. And many companies use this to abuse the system. The landlord can steal a small part of your deposit, and you can only sue. But nobody's going to go through this hell for, say, €100, so the landlord gets to keep the €100. Of course you can sue, but it will cost you a lot more than €100 (even with insurance there is usually a deductible of €300+) and it will take at least a year. And pretty much everything works this way.


If you win the case, the landlord would have to cover your legal fees.


In theory. But if the landlord is hiding (or, more accurately, if the bailiffs don't do their job), you end up paying for everything. But good news! The court order is valid for 30 years, so you might get it all back in the end (probably not).


> The fact that contracts like these, if you forgot to cancel them, can only renew for one month is better than anything I've seen anywhere else.

Do you have a source for this? (maybe it's a new thing) Because the subject of cancelling contracts is even a meme in the German (expat) community

(of course for your standard German you need to be able to plan your life years ahead)


https://www.verbraucherzentrale.de/wissen/vertraege-reklamat...

Initial contract terms can be longer (up to 24 months) and as the site points out, the new rules only apply to new contracts, others can be up to annual.


Thanks for the link, and as I suspected, it is a very recent thing

> Regulations for fairer consumer contracts came into force on 1 March 2022 and ensure that you can terminate automatic contract renewals for contracts for regular goods deliveries and services (such as streaming services or magazine subscriptions) more quickly.

> In addition, on 1 July 2022 a mandatory cancellation button was introduced to simplify termination processes.


> partly for legal reasons, where a company can sue you or at least send you scary legal letters for libel if you damage their brand online like that in Germany. Or, at the least, the company can simply demand the social media platform take down the offending posts, and by German law the platform has to comply, which the likes of Google/Meta will do automatically without any arbitration.

I had Google take down my (negative but factual) review of a restaurant because of apparent "libel". There was basically no recourse (except "you can file a complaint but we'll probably ignore it"). I guess that explains why there are so many bad top rated restaurants.


Forget about restaurants. The problem is the same goes for reviews on more vital businesses like doctors.


> Government provided customer protection laws are quite lax

I have the opposite perception. Most of the customer-screwing business practices I constantly see in other countries don't exist in Germany, because nobody even dares trying them.


They're not German businesses that's why.


There is a huge amount of protection for renters, a lot of things are simply illegal to put into the rental agreement and are automatically void. I really have no idea what you're talking about here.


Yes, there are laws that nullify certain clauses in tenancy agreements, but enforcing them is another story.


There's nothing to enforce. If a clause is invalid, you can ignore it.


Oh, and there are SCHUFA and debt collectors. Let's say a clause is invalid, but they gave you a bill with a due date. You can't ignore it, or you'll get a bad SCHUFA entry, or someone from a debt collection agency will knock on your door. This happened to a friend of mine recently ;( Then it's again your problem to prove that you are right. And it's once again bureaucratic hell.


What if it's money you've already paid that someone won't give you back (while free or paid legal advisers tell you you're right)? It all ends with "you have to go to court". Yes, there is a chance that the other side will settle out of court. But usually they don't because going to court is a very expensive and long process and they don't think you'll do it (and most don't). And then it's just endless bureaucratic hell.

And if it's about the apartment, you've probably paid a deposit. And if you ignore some of the clauses, they will probably try to get back at you and punish you by not returning the deposit (or part of it). From here - GOTO 1 ;(


In which country is any of this different? Yes, if somebody is trying to scam you you may have to be prepared to go to court. I don't understand how else you're expecting a legal system to work.


But you keep insisting that there are rules to protect you. That's not true at all. Yes, there are rules, but you always have to prove that you are right/not guilty. In Germany the system is that you are guilty until proven innocent. You always have to prove that there's a regulation that proves you're right/not guilty, not the other way around. And you can't just ignore some clauses just because you think (or some legal advice tells you) you're right. You'll just end up with a lot of problems.


> In Germany the system is that you are guilty until proven innocent.

This is total nonsense.


I've already given you many examples of how the system doesn't work in your favour by default, all the things that have happened to me or people I know in recent years. But if you want to believe in a "great" German system - that's your choice.


>and are automatically void

And yet they're still put in the rental agreement, because the landlords know they can get away with it as it's a seller's market.

>I really have no idea what you're talking about here.

Google it, or look at Reddit posts from foreigners getting screwed in Germany.


Foreigners are typically getting screwed in Germany precisely because they don't know their rights or where to go to ask for (free) legal advice.

If your landlord puts something in the contract that is against the law you can sign it and simply ignore it.


>Foreigners are typically getting screwed in Germany precisely because they don't know their rights or where to go to ask for (free) legal advice.

Or simply because the alternative is being homeless?

And why should the default for foreigners be getting screwed?


> Or simply because the alternative is being homeless?

As I already mentioned, you can simply sign a contract and then proceed to ignore all the illegal clauses. They're not binding.

> And why should the default for foreigners be getting screwed?

People getting screwed because of them not knowing their rights is basically something that can happen in every legal system, and if people come from other countries without certain legal protections, they're more likely to not know about them. That's just a reality of life.


The only reputation Hetzner has is "cheap".


They're working hard on "volatile", too.


>Both for Hetzner’s reputation

For what now?


Yep, this is common with Hetzner and has been the case since forever. Unfortunately all the comments even suggesting that Hetzner is not good for running serious scaled businesses for this reason and many others usually get downvoted to oblivion and remain hidden.


For anyone else who needs to hear this,

Hi,

I don’t have a mastodon to reply directly to you.

But i have had some issues with content being taken down by VPS providers as well.

What I’ve found works well is to use a VPS provider that the public is unaware of. And for some time I had used OVH, based on the unlimited bandwidth and the reasoning that Wikipedia and Julian Assange (who have far more enemies than I ever will) were using OVH.

I don’t know if that’s true any more because I subsequently moved my content to ENS and IPFS.

Anyway regardless of where your content is actually hosted or lives,

What I had done was turn my “real” servers into content origins, which were concealed from the rest of the world and locked down in the firewall so they could only be reached by disposable squid proxy servers with a 10-line config file.

Then I pointed DNS, Cloudflare, etc. at the squid nodes

And couldn’t care less if they were taken down.

Because I could deploy new ones in minutes elsewhere.

I didn’t have “bad content”, just ruthless business competition that kept coming at me like Tonya Harding.

And I’m sharing because your content didn’t seem too offensive either.

In the front end VPS nodes you’d just put the real address of your content as the remote origin.

And then nobody but you will ever know where it is.

Then generally your hosting company shouldn’t be aware of what it is either unless they’re snooping around in your files, and if they are, hell with them too.

You’re welcome to pass this along as a remark on avoiding censorship, or keep it to yourself as proprietary information, I don’t mind. Let me know if you want or need an example squid conf. It’s seriously 10 lines at most, and many examples can be found on Google.
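
For reference, a minimal reverse-proxy squid.conf along those lines might look something like this (example.com and 203.0.113.10 are placeholder values):

  # squid.conf: accept traffic for the public site, hand it to the hidden origin
  http_port 80 accel defaultsite=example.com
  cache_peer 203.0.113.10 parent 80 0 no-query originserver name=origin
  acl our_site dstdomain example.com
  http_access allow our_site
  http_access deny all
  cache_peer_access origin allow our_site
  cache_peer_access origin deny all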


But then you need twice the bandwidth (once for egress from your "real" server, once for egress from your "front-end" server), you have a lot more latency, you've created additional points of failure, and you need to sync the IPs of your "front-end" to your "real" server to allow it access. Besides that, you now need to find reliable hosting from two providers, one for your "real" hosting and one for your "front-end" (using the same provider would just lead to the same issue as in the original post).

Great if it works for you, congrats. But I don't think this solves issues for many people, I doubt it solves an actual issue for you and it's basically the same as using cloudflare/akamai/similar but with a manually setup proxy on a VPS.


It's only twice the bandwidth if your content isn't cacheable.


Interesting but wouldn’t that introduce a lot of latency?


This is really great advice not just for this situation but in general really. For the proxies/what goes in front, I recommend cloudflare workers.


Doesn't CF have a service to accomplish this anyways that doesn't involve spinning up your own Worker application?


Not the first time this is happening:

  - Ask HN: Hetzner banned me with no explanation. What can I do? (https://news.ycombinator.com/item?id=32318524)
  - Hetzner didn't even provide a detailed info on why they deactivated my account (https://news.ycombinator.com/item?id=40781617)


Got the same. Glad it was early before I lost production systems.

> Dear Mr David Allison

> After reviewing your updated customer information, we have decided to deactivate your account because of some concerns we have regarding this information. Therefore, we have cancelled all your existing products and orders with us.

> Best regards

> Your Hetzner Online Team


Those spaces at the beginnings of the lines that format your text as monospaced are probably also what makes your links non-clickable. (OK, sure, you can click them -- but they don't take you anywhere.)

And why would anyone need anything but code in monospace? Please don't do that.


Also negative experiences here. If they get a copyright-violation request from someone, they won't contact you about it. They'll just take your server down immediately and ask you to respond. Obviously that's not a sane course of action, and I cannot recommend using them for any kind of production systems.

I am always angry when I see articles about them here on HN, because such a vendor should be blacklisted and not promoted.


> If they get a copyright-violation request from someone, they won't contact you about it. They'll just take your server down immediately and ask you to respond

That’s not my experience. We get these emails about once every 6 months, we act and respond, and they don’t take anything down.


> I am always angry when I see articles about them here on HN, because such a vendor should be blacklisted and not promoted.

Is it possible that maybe others had a different experience than you, and those experiences are as valid as your own?

Besides, what was your website about? I've received notices I had to reply to within 24 hours, otherwise they'd delete the servers. But I've always replied and complied, so I've never had any servers deleted.


Your experience doesn’t sound a great deal better and also puts me off this provider. 24-hours is almost synonymous with no warning in my book. How many contact attempts can reasonably be made in that time?

If it’s a single email - then even if it doesn’t get caught in a Spam filter that’s still a short period of time to notice and respond when the stakes are so high.

If that email goes to junk, or you’re unwell and not checking emails as frequently (given - I assume - that many of Hetzner’s customers are individuals) or any other number of reasonable situations, you’ve effectively had no warning before service termination and deletion of data.

I don’t mind cloud providers acting on suspicious usage patterns or abuse reports but there has to be some kind of due process or it just ends up unnecessarily destroying goodwill in a brand/provider.


>If it’s a single email - then even if it doesn’t get caught in a Spam filter that’s still a short period of time to notice and respond when the stakes are so high.

What size company would you have to be for a 24-hour notice not to be problematic? I'm actually curious about opinions here, and understand that part of it is obviously how well your employee leave messaging etc. is managed.

I know one company with a very good manager, and I think they would have managed it with 5 people in the group who would handle this kind of thing (keeping track of all services etc.; obviously only 1-2 people do this at a time, but with redundancy so it falls back to someone else when they are on vacation), in a company of slightly over 30 people altogether.

If you're a startup of 3 people for example 24 hours might be game over.


It's equally stupid for a company of 10k. Even if you have people watching inboxes, it still has to get routed up some kind of management chain before a response can be considered.


> If you're a startup of 3 people for example 24 hours might be game over.

Yeah, I was considering them for my part time projects and some small PaaS-ish stuff. Not now.

Realistically to have 24/365 email coverage you'd need like, full-time founders or at least a couple of paid employees.

For what I was considering, I will be a "founder" but I'll still be working my day job. So effectively that is > 16 hours per day (work + sleep) during which I can't respond. While I will generally be able to respond within 24 hours, I can't 100% guarantee it.


> Besides, what was your website about? I've received notices I had to reply to within 24-hours, otherwise they delete the servers. But I've always replied and complied, so never had any servers deleted.

Some random app vendor didn't like the free promotion on our website https://macupdater.net/

We can delete any "offending" page within a few hours, but Hetzner taking the whole server offline first and asking questions later is not OK.

Others had better experiences and got a 24-hour timeframe. Just asking, but is this during business hours, or can they send you a notice on Saturday and you'll be offline by Sunday? Doesn't seem much better.


"24 hours notice before server deletion" makes them a no-go for me, then.

I was considering them for a small project, but as this project will be nobody's fulltime job, I can't guarantee that I or anybody else would necessarily see that email within 24 hours.


Interesting, and this kind of service seems fine to you? It doesn’t seem fine to me.

Even if most people will have no problem with them, I’d say that knowing how a company handles edge cases like this is much more valuable than knowing how the handle things when everything is fine.


Not my experience as well. They have previously given me 24 hours to respond, or they will remove the server.


My experience was getting the "you have 24 hours" to respond e-mail, and contacting them within 20 minutes, only to be passed around a phone system to be finally told, sorry everything has been deleted.

They offered to "recover" the account, which was basically just an account shell with my info. All of the assets and backups had been permanently erased.


Yikes. That's just about as scary.


That’s hardly any better.


DMCA safe harbor means you don't get sued for hosting copyrighted content. But in return it means that when you get a notice, you gotta take things down. If you don't take it down, then it falls to the infrastructure. You can take down a post, but your hoster can't. But they can take down your server. And they must, or they face fines/jail. And so they will.

Now we need to know the full story. Did you have a public DMCA takedown link and actually handle requests, and the complainers just ignored that and went over your head to Hetzner? Or did you just wing it, running a server with UGC thinking it's surely gonna be OK?

I am not saying you were wrong, but you are only telling a small part of the story.


Tbh this is the most German shit. Germany has borderline neurotic copyright laws, so likely they are doing this to cover their asses legally. Still insane that they don't even notify you!!


When they receive a DMCA notice they will contact you and give you 24 hours to reply and fix it. If you do not comply they will turn off your IP.

However, this is more related to EU regulation than to Hetzner itself.

Hosting things within the EU has become really tough.


> Hosting things within the EU has become really tough.

I, as a European, using mostly dedicated servers within the EU (including Hetzner) haven't noticed this at all. What are you referring to specifically?

Some "use cases" like building marketing profiles and alike certainly has gotten harder, but that's a feature so I'm guessing you're not referring to that. I don't think general "hosting things" has become any harder than before, assuming you're not trying to slurp up as much data as possible.


Which EU regulation? I don't know of any relevant to this. Only American law.


It's German, not EU-level: the NetzDG act has a 24-hour turnaround time for taking down content that is "clearly" illegal:

https://en.wikipedia.org/wiki/Network_Enforcement_Act

Unfortunately the act is designed to block vague categories like "hate speech" and "misinformation" and has huge fines attached, so it's designed to ensure that very trigger-happy enforcement is the only workable strategy. It was written to whack Facebook and Google primarily but it's possible that the wording also captures Hetzner, or they're worried that it might.

If they do feel they fall under it then they'd probably have to automate takedowns in response to abuse reports. As otherwise they'd need 24/7 on-call content reviewers, which goes against their low cost nature. So if this is the cause it's really an issue with German law being unfriendly to smaller/cheaper content hosters.


At least when they try to comply with NetzDG they should also try to store the deleted data for 10 weeks as per the law. That clearly didn’t happen in OP’s case, so it was either Hetzner failing to retain as required or not a NetzDG situation at all.


The NetzDG only applies to platforms, and only to ones above 2 million users.


Yes but what is a "platform"? And if you define a user as someone who connects to your servers, Hetzner certainly has more than 2M.

The questions here are rhetorical. It doesn't matter what we think the answers are. The penalties are so huge that if there's even a tiny chance of a judge disagreeing with you, then you have to take measures to avoid the risk.


That's preposterous.

But sure, maybe eating broccoli will be construed as murder in the future, so best not eat anything at all.


> When they receive a DMCA they will contact you and give you 24hours to reply and fix it. If you do not comply they will turn off your IP.

they did NOT give any 24 hours.


It might be more related to German regulation than to EU regulation. Germany has some pretty strict laws related to speech, for example. My understanding is that Kiwix mirrored Wikipedia data on Hetzner's servers, and I'm almost 100% sure that Wikipedia contains things that are completely fine in the US, but technically illegal in Germany.

I have no idea if that was the reason, though.


"Hetzner" isn't a monolith. I have a feeling these things depend on which country your servers are in.

f.ex. the situation with egress costing money in the US, but it's free in all EU locations.


> f.ex. the situation with egress costing money in the US, but it's free in all EU locations.

Aren't you confusing Hetzner Cloud with Hetzner Robot (dedicated servers) here? AFAIK, Cloud has egress costs while Robot is usually unmetered.


In my experience they usually give a 48 or 72 hour period for the customer to respond before they take action on something like that.

They are exceptionally fast at detecting things like that though.


We always get a warning.


Hetzner froze my account because I owed them €0.02. It was not possible to pay it with a Visa credit or debit card, nor an Amex card. They required me to wire transfer the money. However, my bank does not allow wiring a €0.02 amount, as the amount is too low.

Out of pure spite I built my own data center.


Did you try sending them €20 and requesting a refund of the excess amount?


A customer shouldn't be the one jumping through such hoops in order to "satisfy" a provider who can't be bothered to accept a widely used means of payment.


Sure. In theory. In the real world, being a customer often means adapting your expectations and finding workarounds to get what you need. If this solution could have worked, building your own data center would seem rather extreme and impractical.


Being a customer also means making responsible choices in what companies you do business with.


How did you build your own data center? Is it in your house, or renting a place somewhere? How much did it cost?

I would be curious about any details you can share.


I self host on a NAS at home with a free CloudFront CDN on top; it's really easy to do, and for simple websites (including dynamic ones backed by an SQLite db) that don't receive excessive traffic, it works well and is almost free (since the NAS would be on in any case).

Of course it wouldn't work for all cases, but I find it beats having a VPS somewhere that can be taken down for no reason at all.


This is like saying "I got kicked off a plane so I bought a Civic and bolted a wing on the back". CloudFront is doing all the heavy lifting, and you're not really getting data-center-level reliability.


Does a Civic fly when it has a wing bolted on it? Coz my setup serves pages alright.


Sure, but that is not what people will read when you say "I built my own data center".

"I setup my own server" would be a lot less misleading.


I never said I built my own data center; I think you have me confused with the OP.


OS: Proxmox

Hardware: 4x old decommissioned 19" Dells from eBay with plenty of DDR4 memory; HP ProLiant G10+ are also good

UPS: Eaton Pro

Gigabit fiber internet, which is more than enough. 10-50 Mbit can suffice for compute nodes too.

Bought SSDs and M.2 storage, plus some old spinning-rust drives

Temp and humidity monitoring

Google Nest Protect smoke detector

TP-Link 16 A smart plugs on everything, to have a control plane to turn it all off remotely

Workloads:

Most are LXC

Some Docker

KVM virtual machines

Zero trust: Some Cloudflare

Tailscale

Proxmox Backup Server to back it all up, lots of retention

Monitoring:

Deployed remote uptime monitoring on fly.io

Read and experiment a lot. Hang out on /r/homelab, /r/homedatacenter and /r/selfhosted for learning, community and inspiration.


I have space in my house, using up half a shed. I have 1 Gbit fiber.


People don't like hearing this, but Hetzner support is horrible. In the two years we'd had an account with them, having used numerous auctioned boxes, we had to reach out to support a handful of times, and every single time they started the conversation by telling us it's not their business to help us. They supposedly only help if something's broken, yet when we DID run into technical issues, like NVMes grinding to a halt or transient networking issues, they would go out of their way to tell us they don't give a shit.

We cancelled our account last month because of that.

I cannot imagine the world of hurt we'd have been ushered into had they actually dropped our data wholesale like they did for the OP.


They do normally send out termination mails. You can see an example of one here (note the full month's notice):

https://lowendspirit.com/discussion/comment/191966#Comment_1...

Would definitely be good to hear Hetzner's side of the story, because all the cases I've seen thus far turned out to be a case of the initial teller being understandably upset but leaving out crucial details.

They definitely are trigger-happy with telling customers to find someone else, and generally don't elaborate on why.


Been seeing a lot of negative posts about experiences with Hetzner of late. They're definitely facing issues and losing reputation.


When you ride the cloud/AI hype wave you end up making fast and not very healthy decisions. The same things happen with other providers, so whatever your choice, you must have disaster recovery and replication in place. If you wanna go cheap, just make some S3 backups on R2, Backblaze, or Wasabi.
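
As a sketch of the cheap route with rclone (the remote names here are hypothetical and would be set up via rclone config first), syncing the same backup set to two independent S3-compatible providers is one line each:

  # push the same backup set to two independent S3-compatible providers
  # ("r2" and "wasabi" are hypothetical remote names)
  rclone sync /srv/backups r2:my-backups --checksum
  rclone sync /srv/backups wasabi:my-backups --checksum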


Hetzner was always like that, even before the AI wave. They lowball people with cheap pricing, and arguably this attracts a lot of "unwanted" people, so Hetzner has always acted strictly on such issues.

Last time I used them (pre-2020) they were going as far as requesting customers' IDs and rejecting them on the basis of country of origin, and I assume this also includes facial features that may resemble "an average scammer". Obviously this did not happen to European/American IPs, so those customers never faced such issues, and as such this practice was invisible to the world.

I can say for sure OVH and Scaleway would try to negotiate with you before erasing your data - this may have changed over the years.


> they were going as far as requesting customers' IDs and rejecting them on the basis of country of origin

Wouldn't this be required for most cloud providers? Else, how do you filter out buyers from Iran, Syria, or North Korea, who are probably banned from buying your EU-based services?


I've used OVH, Scaleway, Linode and Amazon in the past, right now I use a small provider that resells Serverius and Hetzner, none of them ever asked me for my ID and all of them allow usage of VPN to sign up for service. As to payment, at least Amazon used to allow usage of debit cards in the past. Hetzner was the only provider that asked me for an ID. I'm not from the banned countries either.


Wow, that is crazy to think about. It must be so easy for Iran and NK to rent a billion hours of GPU time to simulate nukes using stolen bitcoin. Hoi. Truly dystopian!


Neither Scaleway, Oracle nor AWS ever asked me to scan an ID document to prove my identity. I presume my debit card details were sufficient.


Or Hetzner Object Storage! It was released last week and is, as you'd expect, cheaper than all of the above (though R2 would be cheapest if you need a lot of bandwidth, since egress is free with them).


I consider myself lucky to have never run into anything like this, because Hetzner doesn't even allow me to sign up in the first place. Yep, I went as far as uploading my real US driver's license, only because I had heard good things about Hetzner (something I would normally never do). They are like: sorry, still can't tell if you are a robot.

Dodged a bullet.


I've had many bad experiences with Hetzner, from taking my server offline because someone posted something bad and created an abuse report, to unwillingness to cooperate to let me keep my IPv6 subnet after a forced move of data centers, to many minor shenanigans. Oh, and banning my forum account because I was defending myself against some racist accusations (he was the racist).

I am always recommending to not build on Hetzner.

Ok but on topic, who is this guy and why did they do this to him?


I can't stress enough how important it is to own your own hardware and colocate. Also, if you are paying for a dedicated server, you can often save money by moving to colocation.


Colo is a lot more expensive than some dedi or VM somewhere. Can you provide me with a 12-core, 32 GB ECC, 2 TB SSD box for €33/month? I doubt it.


You have to compare the actual performance of the dedi or VM. Cheap dedis and VMs are usually old, cheap hardware with relatively bad performance. I'm running a 20-core, 96 GB RAM, 8 TB colocated server for $55/month.


Where are you getting colo space that cheap? I'm moving a non-profit off OVH and onto dedicated, and I'm looking at CAD $150/month for 2U + 400W.

Also, don't forget about hardware acquisition costs, upgrades over time, replacement hardware and downtime due to outages, etc.



What if you don't want to host your stuff in the same jurisdiction where you live because you don't trust your government?


There are colocation datacenters all over the world.


Sure, but if something goes wrong with your colocated server, you're supposed to fix it yourself, aren't you? So it feels kinda important that the datacenter is close enough that you could get there quickly on short notice. I'm imagining having to fly several hours and cross borders just to replace a failed hard drive, all while your server is down.


You can use "remote hands" service to some extent. For example: http://www.he.net/tour/Fremont_2_220_Remote_Hands_Service.ht...

You could leave a stack of HDDs and other consumables in your server cabinet for them.


I have a small number of very important servers on Hetzner and stories like this make me scared, but I haven't found a cost-effective equivalent for the "Storage Box" product - real block storage. I'm paying €11 a month for 5TB of storage. Is there any competition for that?


Heh, this happened to me the other day. I had hooked it up to PayPal and didn't realize it wasn't set up to autopay, so I had an outstanding balance of $8 for about a week, and they nuked everything. It was just for a hobby project, so it was no big deal and I'll provision a new server with them, but I'm not sure I'd use them for a serious project, even though their prices are good. Granted, if it were for a serious project, I would have spent more time and care setting stuff up.

The funny bit was I paid the invoice, and then my account remained suspended. When support finally got back to me a few days later, they said (and I quote)

  Dear Client
  We want to give you one last chance as a gesture of goodwill, so we revoked the cancellation for you.
  Kind regards
which made my account accessible again. You'd think they'd be a little lenient for new accounts where the debit is less than $10, but I guess not.


Did you get any warning emails?


8TB isn't too bad to restore. At that scale they can back up to a local drive daily for very little money.


Backing up to local RAID is nice, unless you're using local RAID as your primary storage in the first place, like we do. I'd looked into using a combination of AWS S3 Glacier and FUSE (s3fs?) for rigging snapshots to S3 via btrbk, but it seems the semantics of Glacier don't align all too well, and backing up 40 TB+ worth of WAL to S3 monthly is more expensive than it should be unless you're using that storage class.


Been a Hetzner customer for years and have considered using them for a new business project of mine. Will partly reconsider after reading this. At least I'll use a separate provider for backups so I can quickly recover, just in case.

Seeing it happen to a reputable project such as Kiwix [0] definitely damages my perception of Hetzner. I read numerous complaints on Reddit a few months ago, but they mostly boiled down to breaching the ToS in obvious ways. Still, not giving a heads-up before cancelling a service and offering no option to recover data is just bad business practice.

[0] (I've deployed Pi's with Kiwix in remote areas in Africa, it's an amazing project)


Having a single backup with the same provider as your compute is a bad idea, no matter the provider.

Same goes for having your domain with the compute provider.


Hi again everyone, our teams who review such cases do so on a case-by-case basis. We do not use AI or automated systems for these situations, but review them manually. As a general rule of thumb, we try to avoid commenting about these cases publicly. We do that so we can protect the affected customers' personal data. But this doesn't mean that we're not in contact with the customer, which we are in this case. --Katie, Hetzner Online


I have no idea if Hetzner's actions are justified, and surely they could do better at customer communication. The moral of the story, though, is: if you're using commodity hosting and open source software, it is relatively easy for you to find another home... assuming you had good offsite backups, of course.

If you're locked into proprietary services like AWS's, it is a much bigger issue.


This guy posted on /r/hetzner (that's Reddit) too and the "consensus" as usual was that he's doing something wrong and Hetzner wouldn't just cancel your account without a reason even if they will never tell you what the reason is.


I think Section 8 "Use of the services / content" is likely relevant in this case.

Given that one of kiwix.org's stated goals on their homepage is to provide "access to internet ... [due to] ... outright censorship" (censorship which is legally required in some countries), they are likely in violation of the T&C in

"8.1. The Customer is obligated to check and comply with the legal provisions arising from the use of the contractually agreed services"

and

"8.4. If we become aware of illegal activities, we are obligated under Art. 6 Abs. 1 DSA (Digital Services Act) to request that the Customer immediately removes the offending content and we are entitled to lock the Customer’s access to their Hetzner services or account."


This thread is not really interesting, because we don't have Hetzner's side of the story.


My rule of thumb is "if you had the opportunity to tell your side of the story but chose not to, then I'm going to accept the other party's side of the story as gospel".


Some people just don't respond, period


So? If that policy ends up damaging their reputation, it is no one's fault but their own.


Sometimes it's just better to rarely intervene. Yes the issue of being in control of your story is there.

But replying makes the story last longer.

And there's the saying, it's never confirmed until it's denied.

In the end, they did respond.


The lack of response from Hetzner is part of what does make it interesting.


You may have seen it, but they did respond after you wrote this (both here and on reddit):

https://news.ycombinator.com/item?id=42375229


No response usually means it's a legal case.


Or, it's 13:00 on a Monday in Berlin, customer support/PR department just got started and are working through the weekend backlog, haven't had time yet to respond in any reasonable way.


If you terminate accounts on/over a weekend, you should have support staff over the weekend.


I didn't mean to imply there is no customer support on weekends, but usually you have a weekend crew that is a lot smaller than the typical work-week crew, so there are still things to catch up on after a weekend, even with crew working weekends.


Which isn't the same thing as public relations staff.


iirc Hetzner almost never responds to these posts. On HN or reddit.


Their loss. I'm not touching them with a ten foot pole unless they acknowledge what went wrong and what they will do to make sure they don't fuck up like this again.


They responded


All I've seen here are non-response "responses". You saying they actually responded on Reddit, or what?


Here and on Reddit


> Here and on Reddit

As I said, I haven't seen any actual responses here -- only non-response "responses". So are you saying they gave any different ones there? If so, where, specifically? Or if you're claiming any of the ones here are non-empty, which one(s)?


It makes it intriguing, not interesting.


I'm a native English speaker, but I have no idea what you mean by that, since the words are almost synonyms.


I meant that it generates curiosity, but it does not satisfy it.


Adverse inference. The company thinks it's better for them to keep quiet than to tell their side. That speaks volumes.


I agree and don't think you should be downvoted for this opinion. With only one side of the story it's impossible to draw any conclusions yet.

Of course there is usually a bit of a chicken and egg issue with this sort of thing. Many companies only respond at all when complaints go viral on sites such as hn.


And now we do have Hetzner replying here (and on reddit), and it sounds like they did indeed communicate the termination well in advance (but for whatever reason the OP didn't notice it at the time).

None of this is to say that Hetzner responded in an ideal manner, and whatever reasons exist for the termination are still not known, but it seems likely at least some of the OP's criticisms of the process are not valid.

Of course now it's too late as many will never come back to review this thread. As often happens on hn when somebody complains about something, the pitchforks come out before there is enough information to really understand the situation fully.


There is zero new information in the responses from Hetzner. They claim to have sent an email, but that claim was already relayed in the mastodon thread.


Did you follow the whole thread, over to the back-and-forth on reddit? Somebody from Kiwix confirms they did in fact get the email warning.

[0] https://news.ycombinator.com/item?id=42387842

[1] https://old.reddit.com/r/hetzner/comments/1ha5qgk/hetzner_ca...


[flagged]


Mastodon has nothing to do with this.


Three possibilities come to mind:

1) There is some fundamental data aspect Kiwix hasn't mentioned (or is entirely unaware of). I.e. CP or some other super illegal stuff.

2) Hetzner is profoundly incompetent, deleted production servers by accident, and the "But we sent you an email!" thing is a lie to cover up the mistake.

3) There is some kind of interaction that happened prior to this that we aren't privy to. Perhaps a series of late bills, legal threats, or some other inter-personal issue.

Predicted outcome:

I expect either that Kiwix gets a knock from federal/national authorities, or the more likely outcome in my opinion: some frustratingly vague statement from Hetzner PR about its customers being "mistaken" as to why the data went poof.

I mean seriously, let us assume it's something illegal: Sure, fine, whatever. Wouldn't it make more sense for that material to not be deleted, so whoever the guilty party is could be arrested and prosecuted for it? Deleting the servers would be like police being informed about a murder weapon and asking the tipster to destroy the weapon before an arrest is even made. It doesn't make any sense to me. Surely if some bad thing were discovered, there would be some method to encrypt/restrict the illicit material without destroying it.

Either bad blood, unpaid bills, or simple incompetence seems like the most likely culprit to me.


Huge missed opportunity to use “name-and-fame”.


This headline could probably use more context; even from the thread, it's hard to tell whose account was cancelled or what the significance of that might be.


I don't know why this is a surprise to anyone wrt Hetzner. Users have repeatedly warned that Hetzner terminates accounts of clients that they do not like. Hetzner does this without warning, even having the audacity to send you a bill thereafter.

As an example, you run any crypto related operation, even if it's a mere 5% of your workload, you will have this happen to you. You don't even have to be hosting anything at all.


Would be good to have a fall-back solution. Is there something similar to Hetzner in dedicated server price in the EU? Or does no one else come close?


OVH is comparable, as is DigitalOcean.

OVH had a datacenter burn down a few years ago, so make of that what you will :)

Imo it makes sense to spread out and have backups with a different provider than the one your main servers run on. They should all offer S3, which is a standardized API for easy syncing of backups.


Scaleway/Dedibox/Online.net, but the hardware is quite a bit older for the price.

There's also Webtropia, which I have used in the past, also German, without issues.

OVH would be the safest bet, but their support is worse than Hetzner's.


While most of these are great, none of them seem to have support for 4x NVMe drives which Hetzner has.


The OVH Advanced range has support for 4x NVMes.


Ah, I see they do have that now!

But it's 1 Gbit public and max 5 Gbit (plus double the price).


OVH ? Infomaniak ?


> Infomaniak

Do they do dedicated? I could only see cloud and colo on their site.


The biggest issue I have had with Hetzner was with a dedicated server. I was constantly (3 times or more a week) getting abuse messages about my MAC address not being correct:

"""" We have detected that your server is using different MAC addresses from those allowed by your Robot account.

Please take all necessary measures to avoid this in the future and to solve the issue. We also request that you send a short response to us. This response should contain information about how this could have happened and what you intend to do about it. In the event that the following steps are not completed successfully, your server can be locked at any time after DATEHERE.

How to proceed: - Solve the issue - Please note, in case you have fixed the problem, please wait at least 10 minutes before rechecking: https://abuse.hetzner.com/retries/?token=TOKENHERE - After successfully testing that the issue is resolved, send us a statement by using the following link: https://abuse.hetzner.com/statements/?token=TOKENHERE

Please visit our FAQ here, if you are unsure how to proceed: https://docs.hetzner.com/robot/dedicated-server/faq/error-fa... """

I was just using standard Docker to host a web app. No Proxmox or KVM of any sort. I would just wait the 10 minutes, click their link (https://abuse.hetzner.com/retries/?token=TOKENHERE), which would retry and come back fine, and my response would be: "I changed nothing and the retry came back solved. I've done tcpdumps over a week's time to see if any MAC addresses leak from the OS, and none have, while a similar ticket like this gets opened every couple of days." The ticket would close shortly after I submitted.

I inquired with them at least twice about this, and they just kept telling me I was leaking a MAC address that I wasn't allowed to, even when I had proof from tcpdumps over a week-long period. I found someone else who had this issue with them (most issues like this that I found were from people hosting Proxmox), and they had Hetzner replace the NIC, which fixed the issue. Well, Hetzner wouldn't replace my NIC because "it was working", even though I referenced these abuse tickets. I ended up getting another dedicated server, migrated my app over there, and I haven't had issues since.

Their support is seriously not very good. Since that experience, I have kept backups elsewhere and test restoring those backups regularly. The price-to-performance I get from them is unbeatable and, like I said, I haven't had issues since getting a new machine. But I'm definitely cautious and don't exactly trust things not to go sideways, even though it's been 2 years since that experience.
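
For anyone wanting to run a similar check, something like this (the interface name and MAC address are placeholders for your server's own values) prints link-level headers for any frame leaving the NIC with a source MAC other than its own:

  # show any outgoing frame whose source MAC is not the NIC's own
  # (-Q out limits capture to outgoing frames; supported on Linux)
  # replace eth0 and the MAC with your server's actual values
  tcpdump -Q out -i eth0 -e -n 'not ether src aa:bb:cc:dd:ee:ff'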


I don't get why they needed to bother you about this at all.

Every half decent switch made in the last 25 years can be configured to allowlist MAC addresses. Either that, or dropping customers onto their own VLANs is the standard way of managing this.


That sucks. I was literally trying to download some files into Kiwix and it didn't work.

Some of the files they host are pretty big, so maybe Hetzner just decided it wasn't worth hosting any more.

I've been using Hetzner for years though and never had an issue. But I don't get anywhere close to the 20TB traffic limit.

This reminds me that I should set up some backups though.


Unsurprising. The crypto validators were the canaries in the coal mine for Hetzner, and almost no one cared when they were kicked off of Hetzner. Now they have terminated your servers, and the same has happened to you.

After all, Hetzner is now prioritizing shareholder value and is removing smaller customers wasting their compute resources.


How do smaller customers "waste" Hetzner's compute resources? If anything, from Hetzner's point of view, smaller customers make more efficient use of those resources than bigger customers do -- because they pay more money for the same service!


[not OP]

Crypto validators can be quite noisy neighbours, which is a problem on fair-use VPSes.

Don't think it relates to small or not.


Hi there, We are a privately owned company, and do not have shareholders. --Katie, Hetzner Online


Nitpicking. You still have people who own the company (technically also shareholders...) and want profit.


> people who own the company (technically also shareholders...)

Unless they are a "GmbH & Co KG"?


So Hetzner is not recommended for business. They cancelled themselves out of professional services.


Having said that (it is not a popular position because it is not cost-effective), I would not trust any hosting/cloud service: I would definitely have all my business data in at least 2 different ones, run my services on two different major OSes (probably Linux and FreeBSD), and use a separate company for 2 different domains that are known to my customers (and apps, for auto-retry).

But I am not in a position to take those decisions anyway.


Curious if the affected party had a plan for this situation beforehand.

I have stopped relying on instances being secure, and I map out a just-in-case strategy (which I also regularly review and exercise) to quickly reset/restore and get back on track.


This is not good.

It does raise an interesting question: how do you reliably contact a customer if email is broken?


Of course, this is very concerning. I'll wait to see what their response is. I do understand there are many reasons to trash Hetzner, as they are much, much cheaper than the big 3 hyperscalers, and many HN posters are employed by those.


We need to boycott hetzner.


Hetzner is cheap, but cheap often has hidden costs/risks.

AWS, Azure, and GCP aren't cheap, but they offer better stability, both technically and operationally.


And service.

AWS has lots of problems. But they have a team of real humans who respond to tickets around the clock and actually understand stuff. To many businesses, that's worth the extra cost alone.


I wouldn't go as far as to say GCP has service or support, though AWS definitely does.

Even if you're a nobody spending $30 a month, AWS are extremely responsive and helpful.


Yep. Obviously YMMV, but I found AWS TAM to be much better than Google Cloud's (but that's not surprising).


AWS, Azure, and GCP aren't cheap for sure, but they are certainly making a lot of money.


Every time I read about Hetzner on HN, they are shutting down somebody's account on a whim. Why on earth do people keep choosing them?


I've been using them for years and I've never had a problem with them. When threads like these pop up, it's usually people yelling about boycotts and how it's completely crazy that this can happen, while completely ignoring the response from Hetzner itself saying they gave this customer one month's notice.


Hetzner did something similar a couple of years ago, suddenly disabling 1000 Solana validators that were using their service:

https://www.theblock.co/post/182283/1000-solana-validators-g...


That doesn't sound very similar at all. Their Terms and Conditions specifically say they don't allow cryptocurrency mining or similar, so hardly surprising that they shut down something like Solana validators.


Solana validators do not perform cryptocurrency mining.


I'm well aware of this. Read the T&C and I'm sure even you can understand the intent.


Are we talking about the actual T&C or the "intent" of T&C?


"The operation of applications for mining cryptocurrencies remains prohibited. These include, but are not limited to, mining, farming and plotting of cryptocurrencies"

It's pretty clear they basically prohibit everything related to cryptocurrencies, even content, as you can see in their T&C.


I have no idea what farming and plotting are. But if they wanted to prohibit everything related to cryptocurrencies, they would not say ‘applications for mining cryptocurrencies’; they would use more definitive language such as ‘applications for operating cryptocurrencies’.


Hetzner = Cloud BOFH?

That said, I host on them too. But some stuff is on nearlyfreespeech.net.


Just FYI: I know a few others who have suffered the same treatment from this company.


After its experience with targeted "deplatforming", Rumble started its own cloud: https://www.rumble.cloud/


And their T&C have the same termination clause as anyone else's.

> 6. Termination

> The Provider may terminate this Agreement at any time, for any reason, with or without notice to the Adopter. Adopter may terminate the Services upon notice to the Provider.


Ouch, that’s a bummer


hurt people hurt people.


How much of this is really Hetzner's fault, versus the EU and its Digital Services "think of the children" Act, which presumably makes hosting providers act in ways I'm sure were largely predicted while it was being drafted?


Close to zero from the DSA, given that's how several European cloud providers have acted for the last decade.

There's certainly other (German) legislation in place that might be relevant, but what's the point of speculating if we can't blame the EU for it.


[flagged]


Which VPS are you using and recommend?


Not OP, but I use OVH, which is better value and more reliable. OVH VPSes also include unlimited bandwidth; by comparison, Hetzner imposes data caps and will nickel-and-dime you on just about everything. I run a few bandwidth-heavy applications that do 10-20 TB of traffic a month. If I were using Hetzner, I'd effectively be flushing cash down the drain.


OVH is also subject to the Digital Services Act...


As well they should be


[flagged]


Kiwix does not operate Mastodon servers.


Deep. Thanks, my brain apparently does not work while I’m fasting.


could even't spell derp right. need food lol


[flagged]


Resist the urge to self-promote this way. It comes across as being very tacky.



