Review of Hetzner ARM64 servers and experience of WebP cloud services on them (webp.se)
277 points by novakwok on June 17, 2023 | 90 comments



I'm excited about ARM in more places, but my experience with ARM and Docker hasn't been as easy as I expected.

Is it just me? Since I started using ARM more, I've noticed that Docker images are often incomplete or behind the x86 release cycle.

I love the ease of wiring docker images together for all my services (corollary: never having to understand the myriad packaging issues with whatever language the service is written in, python, nodejs, etc).

But when I'm using an arm image, often it is not the same version as the latest on x86, or even worse, is packaged by someone random on the internet. If I were to install the JavaScript service myself, I could audit it (not that I ever do!) by looking into the package.json file, or reviewing the source code, or whatever. There is a clear path to reviewing it. But with a docker image from the Internet, I'm not sure how I would assert it is a well behaved service. Docker itself gives me some guarantees, but it still feels less straightforward.

I've packaged things for an arm container myself and it isn't always exactly the same as for x86.

Is this just me? Am I doing it wrong on arm?


I felt this a year or two back, but today I've had as good an experience on Docker with arm64 as I do with x86_64. I use arm64 Docker a lot since I work on an M1 MacBook.

I usually stick to the common base images, e.g. ubuntu, alpine, nodejs, golang, etc. and install based off of that. Also, I rarely write Dockerfiles these days and instead use Earthly [0], which is a tool that really shines as a CI/make alternative, but it incidentally also has a nicer syntax which makes it easier to write multi-platform Docker images.

What images or other problems have you ran into on arm64?

[0]: https://earthly.dev/
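
For comparison, the plain docker buildx version of a multi-platform build looks roughly like this (not Earthly syntax; the image name and Dockerfile are placeholders):

    # one-time: create a builder instance that can target multiple platforms
    docker buildx create --name multi --use

    # build and push a single tag containing both amd64 and arm64 variants
    docker buildx build \
      --platform linux/amd64,linux/arm64 \
      -t registry.example.com/myapp:latest \
      --push .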


For example GitLab. The latest ARM image, as far as I could tell, isn't the same as the most recent x86 one. And, IIRC, it was from some other person, not GitLab. It's often hard to tell what you are getting when you run an image, because docker pull can pull an image that isn't a multi-platform build. I've had issues where the SSL certificates don't work; the stack could listen on 443, but full SSL didn't work when running on ARM. I'm not sure whether that was because it was emulating using Rosetta, or whether the software inside the container built correctly but isn't actually running correctly on the ARM platform, or what. It just feels like the wild west with ARM images right now. I'm sure it will get better, but it is still a minority platform and that comes with these issues.
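
A quick sanity check before pulling is to see which platforms a tag actually ships; gitlab/gitlab-ce below is only an illustrative image name:

    # lists the platforms (linux/amd64, linux/arm64, ...) behind a tag, if any
    docker buildx imagetools inspect gitlab/gitlab-ce:latest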

And, this might just be exposing my ignorance. Until recently I hadn't needed to use arm but now with macos it's gotten more interesting and more complicated.


> And, iirc, it was from some other person, not gitlab

That would have to be a different image, then?


Yes, my memory is a bit foggy, but it was difficult to get any of the official images to work, so I started playing with images from other contributors. But, you are right.


> The latest arm image, as far as I could tell, isn't the same as the most recent x86. And, iirc, it was from some other person, not gitlab.

Ah, that's fair. If you're running software packaged by others, that might be less well covered, because you'll have to wait until all of the vendors you care about add that support.

If you're developing software in arm64 Docker, I think that case is pretty good today.


You can inspect the layers of a Docker image. Tools like dive[0] provide a quick and easy way to navigate through the different components your image of choice is made up of.

In terms of functionality once the container is running, you'll have to put some amount of trust into the project maintainers, no more or less than the trust you need on amd64. For containers repackaged by third parties that's quite a pain, but in most cases you can get by just fine with the official container.

If your container of choice has been made by someone real fancy, you may be able to get reproducible builds for all the files inside the container. That would verify that the source and the binary match (though container metadata may not, so a direct image compare would be challenging).

[0]: https://github.com/wagoodman/dive
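
For instance (the image name is arbitrary):

    # interactively browse each layer and the files it adds or changes
    dive ubuntu:22.04

    # without extra tooling, show the command that produced each layer
    docker history --no-trunc ubuntu:22.04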


Dive seems to have been abandoned though.

I used it a few times in the past.


Does it no longer work? I thought I used it just fine a couple weeks ago.


It still works, at least until there are OCI updates it can't handle, and there are occasional bugs depending on the image.


> never having to understand the myriad packaging issues with whatever language the service is written in, python, nodejs, etc)

How do you fix issues with the docker images if you don't understand them?


This sounds about right to me. At work, we make a rather complex stack that uses quite a few third-party containers. When we wanted to do arm64 support a couple years ago, most of these dependencies did not support arm64, so we had to build and publish the containers ourselves. (We already sort of had to do this anyway, because customers ran into Docker rate limiting issues, and images from our account aren't rate limited because we pay them not to. But when we only supported amd64, we just re-tagged and pushed.)

As an aside, some comments in this thread say "just look at the layers", but that's the wrong level of abstraction for multi-arch images. In the past, when you ran "docker pull ..." you were looking for an Image Manifest: https://github.com/opencontainers/image-spec/blob/main/manif.... But now in the world of multi-arch, you are getting an Image Index first: https://github.com/opencontainers/image-spec/blob/main/image...
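
A concrete way to see the difference is to inspect the index versus the per-platform manifests it references; the image name here is just an example:

    # for a multi-arch tag, this returns the Image Index: a list of per-platform manifests
    docker manifest inspect ubuntu:22.04

    # --verbose resolves each referenced manifest and shows which platform it targets
    docker manifest inspect --verbose ubuntu:22.04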


I'm not that excited yet, because except for dumb single-board computers, I still can't get a proper ARM system to run at home.

My home server (a repurposed Fujitsu Esprimo Q920) is still Intel-based, and there doesn't seem to be anything available with comparable performance and connectivity. And I'm not even considering the price point.

Basically: arm cpus don’t play any significant role in my everyday computing life.

At work, I’ve been migrating all of our infra to graviton and we realised substantial savings… but then again, I don’t pay the cloud bills and my salary is still the same, so meh.


I don't have too many gaps, but I also don't use that many different base containers, for security and reliability reasons. As you mentioned, I feel like in a decade the current experience of running random code from strangers all over the internet, with no more protection than Docker Desktop provides, is going to sound similar to how 1970s swingers' accounts of unprotected orgies sound to those of us who grew up after HIV: people will kind of accept that it happened but be amazed that everyone was so reckless.


You're not wrong... but it will get there and get better. I believe Asahi will be a driving force behind it, along with ARM in general being more widely used for non-mobile stuff. However (despite using Fedora on arm64 as a daily driver), I firmly believe we're 6-12 months minimum from ARM Docker being "alright" (I'm also broadly including fully user-transparent x86 emulation in this sweeping statement, with no basis lol).


The friction with using Docker across arm and x86 was one of the big reasons that I ended up learning NixOS. Now, all the services on my personal remote box and all my one-man-SaaS services run on NixOS + systemd services and my life is so much easier and less stressful.


Sounds like a Docker issue, not an ARM issue. My full desktop NixOS config builds for x86_64-linux and aarch64-linux. It even cross-compiles about 90% of the way; possibly just one "external" package isn't set up right for it. And actually that might even be fixed, I just saw a cross-compilation fix go in today.
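
As a rough illustration of what that looks like from the command line (the flake attribute names below are made up, not from my actual config):

    # build the same NixOS configuration for each architecture; "mybox-x86" and
    # "mybox-arm" are hypothetical hosts defined under the flake's nixosConfigurations
    nix build .#nixosConfigurations.mybox-x86.config.system.build.toplevel
    nix build .#nixosConfigurations.mybox-arm.config.system.build.toplevel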


I haven't had problems because I just build images myself, using images such as alpine as a base.


One thing that made ARM on Docker much easier was using the Kubernetes builder for Docker. Spin up an ARM node in Kubernetes, create the Docker builder pod, and it'll build/push your Docker image easy as can be.
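
If that's the buildx Kubernetes driver, a minimal sketch looks something like this (builder name, node selector, and image tag are placeholders):

    # create a buildx builder backed by pods in the current Kubernetes context,
    # pinned to arm64 nodes via a node selector
    docker buildx create \
      --name k8s-arm \
      --driver kubernetes \
      --driver-opt nodeselector=kubernetes.io/arch=arm64 \
      --use

    # builds now run inside the cluster and can be pushed directly
    docker buildx build --platform linux/arm64 -t registry.example.com/myapp:arm64 --push .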


I face similar problems running a docker-based NAS on a Raspberry Pi. But I end up just building the official images myself on the Pi (or on my dev machine with qemu) from the open source Dockerfile of the official image.
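
If it helps anyone, the qemu route on an x86 dev machine is roughly this (the image tag is a placeholder):

    # register qemu handlers so the x86 host can run and build arm64 binaries
    docker run --privileged --rm tonistiigi/binfmt --install arm64

    # then build the upstream Dockerfile for the Pi's architecture
    docker buildx build --platform linux/arm64 -t local/nas-service:arm64 .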


TFA describes the E3-1230 as an 8 core server when it is actually a 4 core server with 8 threads. That means the ARM vs x86 per-core performance comparisons are off by a factor of 2. I stopped reading when I noticed that. For cheap sustained compute, it's hard to beat a Hetzner auction dedi.


Thanks for pointing that out; I've updated the blog post to make the description more accurate.


This has always been the case with vCPUs. But many don't realize that a vCPU in many cases means a thread, not a core.


Great article, thanks for sharing!

We're using Hetzner's new ARM servers ourselves, to convert images to WebP (yes, your company name is really confusing!), and they perform almost as well as the Hetzner AMD instances.

But since they're so much cheaper, we can easily fire up many of them and use a load-balancer in front, saving a ton of money compared to dedicated servers.


[Yes, your company name is really confusing]

LOL, we're not a company, we are just a small team of three individuals (Nova Kwok, Benny Think, and Tuki Deng).

[convert images to WebP]

May I ask about your use case for this? We've recently launched a product called WebP Cloud that might fit this need, and we're actively seeking seed users.

WebP Cloud documentation here: https://docs.webp.se/webp-cloud/


Alright, the way it was mentioned in the article made it sound like a business, sorry about that.

Your service looks great, but we concluded long ago that using an API for image conversion would be many times more expensive than our own setup. We also have fetches from external sources, storage in S3, Cloudflare Workers, and generative AI mixed in the bag - no single service supports all of that yet (hint).


> they way it was mentioned in the article made it sound like a business, sorry about that

It is a business. They're selling a service. I don't know why they're protesting at the notion of being a company, they're de facto a business (selling service behind a brand, which they're openly promoting to sell more services).


Hmmm, maybe calling this a start-up/business might be more appropriate?

(WebP Cloud Services started by providing a free Gravatar/GitHub avatar reverse proxy with WebP optimization, and the private proxy is now our first attempt at a paid service, as more of our users want it to be more generally available.)

(And we are currently not a company indeed) ´・ᴗ・`

No intentional protesting at the notion of being a company, just unsure whether "company," "business," and "startup" mean the same thing in certain contexts.


If you're planning to build a business together, forming the company ASAP is a good plan. Recently talked to some founders who split up before they incorporated, and it was a mess.


[If you're planning to build a business together, forming the company ASAP is a good plan.]

Do you have any advice in this regard? We do have a preliminary plan to register a European company in Estonia (through e-Estonia) after achieving good revenue to continue our operations.


You absolutely must do this BEFORE any sort of revenue. It should be the first thing you do.

You need a company to own things, such as the IP (code, trademarks, website, customer lists), as well as to be the entity to which revenue is paid. You'll also find you can't do most things without it (such as getting a credit card, an office lease, cloud discounts, etc.).

Most importantly, suppose you have a cofounder break-up when you have just started getting "good revenue" but haven't yet formed a company. Whose revenue is that? Who owns the code you wrote? A complete mess.

I don't know anything about e-Estonia, but if they allow you to sign up today, no reason not to do that. In the US (or abroad if you want a US company), Stripe Atlas is a good option. That might work for you too.


It's a bad idea. From what I've heard, Stripe and other banks will at some point want proof that you live there. And tax authorities usually see it as tax evasion if you don't have "substance" in the country of incorporation.


The moment you started providing a paid service you became a business. The legal status, whether a company, independent, or whatever, depends on your local laws.


Ha! Another one :-) !

We created a company that does something similar[^1]. The tech was great and the company is profitable, but the market is really, really tough, with incumbents (read: existing CDNs) playing all sorts of "standard business practices"[^2] to keep customers in their more expensive business. And yes, in this line of business you really want the cheapest hardware.

[^1]: Support for transcoding images to WebP, AVIF, and JPEG XL, and selecting on the fly the best format for serving individual images on a website. The company (ShimmerCat AB, a Swedish registered company) is currently for sale; contact the CEO if you want a bargain[^3], last time I heard the asking price was X0 000 USD, with X less than 9. I'm not part of the company in any capacity any longer.

[^2]: Read: standard dirty tricks to suppress the competition.

[^3]: Who is the CEO is public in the Swedish registry of companies.


Since moving to Apple Silicon, I've been wanting more ARM options in the cloud. Although it is possible to host x86_64 VMs, having fewer differences is obviously better.

I've been using Oracle's free tier for a while, and it's been OK. Performance-wise, my Objective-S and libµhttpd based web-server appears to be doing around 1800 requests per second, and held up fine to a HN hug of death.

Hetzner was far, far easier to set up, both from their console and via the API. Performance was comparable.


AWS has great support for arm64 instances


I’ve been migrating workloads away from x86 and towards ARM on AWS and GCP since they’ve been available. This review does a great job of kinda giving you an idea of what you are gonna get as a platform, but if you are interested I strongly recommend the experience on any cloud provider.

While there was some work to benchmark and validate, the cost savings have been non-trivial. Plus this change happened as we were all switching to the M series Macs so ironically now our entire chain end to end is off x86.


For us it was driven in the other direction. With the introduction of the M1s we knew that we’d be on arm locally soon enough. There was a bit of work in the transition but things have improved since then. Definitely happy running on all arm now though.


I just refer to ARM Lambda runners as a free 20% discount since it makes absolutely no difference in runtime but costs less.

I'd also run ARM database instances but I think those are still not really that readily available.
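
For anyone curious, opting into arm64 on Lambda is just a flag at function creation time; the function name, handler, role ARN, and runtime below are placeholders:

    aws lambda create-function \
      --function-name my-arm-function \
      --runtime python3.10 \
      --architectures arm64 \
      --handler app.handler \
      --role arn:aws:iam::123456789012:role/my-lambda-role \
      --zip-file fileb://function.zip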


Alas all my stuff is in Azure, and I'm still waiting for them to offer smaller VM sizes comparable to their existing B line. I currently use a B1s (1 CPU 1GiB) that comes to ~$5/mo while the cheapest ARM VM would be ~$25/mo (2 CPU 4GiB).


I was keen on migrating to ARM, but there seem to be no benefits from doing so on GCP; I'm open to being wrong here.

From what I understand they're using Ampere Altra, which has single-thread performance similar to Skylake, but the cost is equivalent to or worse than the x86 e2 series.

e2-standard-4: USD 97.84/mo

t2a-standard-4: USD 112.42/mo

(sustained use discounts apply to neither).

EDIT: I see you're in Denmark and are operations focused. I am too operations focused and just across the bridge in Malmö, maybe we could hang out.


Yeah sorry I should have been more clear. Currently the ARM instances in GCP when you use them as spot basically never get interrupted. We’re big into GKE so use them as a preferred node group for interruptible pods. I assume due to the pricing you mentioned usage is very low.

So basically any background jobs or big batch-processing jobs that require a lot of CPU time. We have multi-arch container builds, so if we can't scale out the ARM node group it's not a problem; we go back to x86. But it was worth the optimization to get effectively always-available spot instances.

Yeah always open to meet up with folks. I’m on mastodon at matdevdug@c.im.


The real hidden gems of GCP are the 90% off spot instances in a lot of regions for e.g. N2D.

ARM makes 0 sense on GCP if you can use those.


T2A vCPUs are full cores though right? While E2 and most other instances are hyperthreads.


Actually I'm in Sweden. Of course we could hang out sometime. Just cross the bridge. Here's my email: emeries-atolls.0w@icloud.com


I like the article, but I wish there had been an "Abstract" or "Executive Summary" at the top so that I'd be spared having to read the entire article to find out the results. I'd like to have seen something along the lines of the following:

"We found Hetzner's ARM64 offering, specifically the CAX21 with 4 cores, 8GB at $8.40/month, to be a performant and cost-effective alternative to x86_64-based solutions."


Also add that, based on the tests, ARM performed 8% worse than amd64, but this is offset by the 14% savings.


Also, notably, the team who did the benchmarking were impressed enough to have actually switched entirely to said CAX instances for their app.


Good idea! I've added some TL;DRs at the beginning of the article.


According to Oracle's documentation, their Arm servers use actual on-core tenancy rather than virtualized cores, counted in OCPUs instead of conventional vCPUs.

https://blogs.oracle.com/cloud-infrastructure/post/vcpu-and-...


Interesting read. I'd like to know more about alpine problems (even just to confirm my bias against it, unless space savings are the most important thing).

For me, Hetzner is mostly a bare-metal provider. They have the dedicated RX line, and if you have a base load, a couple of those could run it all (use Hetzner cloud instances for scaling and failover).


[I'd like to know more about alpine problems]

Sure, and we're planning to share another post later on the whole procedure of our migration from AMD64 to ARM64; in that post we'll include more details about the ClickHouse problem if we can definitively establish that it was caused by alpine. (After this incident I personally have a bias against alpine images too.)

Comparing alpine and non-alpine images on DockerHub:

https://hub.docker.com/layers/clickhouse/clickhouse-server/2...

https://hub.docker.com/layers/clickhouse/clickhouse-server/2...

There is just ~66 MB (255 MB vs 321 MB) of size difference; my personal advice after this is to avoid alpine images in production as much as possible :P


This is a great article, and it's nice to see we have lots of alternatives for running ARM servers.

I ran the now defunct Scaleway ARM server mentioned in the article for several years. For €2,99 it was a surprisingly useful machine. I ran several projects (.net core) on it and it was quite good for those simple workloads. I looked for alternatives for a while but nothing turned up until Apple restarted the ARM revolution with M1.


I've been using a CAX41 (16 cores) instance for numerical computations recently. Geekbench scores are 774/10221, and it costs $0.04 hourly ($27 monthly). Perfectly stable. No throttling (probably not that popular yet hehe). For my specific program it's 10% slower than my laptop's 11980HK processor (8 cores, 16 threads).


I’m always so taken aback when I compare VM prices from Hetzner/OVH and AWS/GCP.

A similarly sized machine in AWS seems to be around $300 monthly; that's 10x the cost.


Amazon/Google have fallbacks across regions, several layers of data storage redundancy, high-speed and highly configurable software-based networking, and so much more.

Hetzner/OVH has machines with almost no failover, with no extra availability zones, with no backup guarantees, very little in the way of custom networking, and doesn't integrate with dev tools quite as much.

They're different products. For most people, going Amazon/Google makes no sense. However, if you absolutely MUST keep your data available after or during a fire [0] and keep your systems running during datacenter downtime, you're better off with AWS/GCP/Azure. SLAs with many nines can't afford cheap servers, and that's where the big cloud providers make a lot of money.

Up until recently I saw a lot of people and companies move back from the cloud to self-managed dedicated hardware in data centers. All most companies need is half a rack in two places and a competent sysadmin team, but externalizing the risks is often attractive because disasters and bad failovers do happen sometimes.

[0]: https://www.datacenterdynamics.com/en/opinions/ovhclouds-dat...


Absolutely no arguing that AWS adds more value.

Another thing about AWS/GCP: they are also good at locking you in. For example, if you want to shift some workloads to Hetzner while leaving others in AWS, you will get a bill for egress out of AWS.


>> many nines can't afford cheap servers

I wouldn't say Ampere Altras are cheaper or worse servers than AWS's Gravitons. And many nines is a fiction anyway. For example, Google Maps had two prolonged downtimes in 2022.


> Hetzner CAX11, with a virtualized ARM64 processor, 42 cores, 4GB memory, priced at $4.91 USD per month, referred to as CAX11 for simplicity.

Haha I wish it was a 42 core for $4.91

Small typo for them to fix.


Thanks for pointing out, now fixed!


Wow, this is timely. I just bought their cheapest one last night (about $4/mo) to play with and performance test it for ASP.Net Core, vs. their x86 boxes.

I tried to be ultra cheap and not buy an IPv4 address, but it appears Microsoft doesn't have IPv6 on all their download servers, which is causing me pain.


I have been developing on ARM servers for a while. I use Raspberry Pis and Tinkerboards as dev and staging servers and push releases to an x86-64 server on DigitalOcean. With Docker it has been pretty easy; docker-compose usually finds the right packages for the CPU and it works quite well. I am curious about maybe trying one of the ARM servers on Hetzner to see how it compares.


I've been trying their Arm servers for a while and I've noticed some differences in the colors in htop for Debian 12, as if there were a slight difference between the x86_64 and the aarch64 image. Other than that everything's going fine and I'm planning to use Arm for every server in the Falkenstein datacenter (the only one with Arm dedicated and cloud servers for now)


Nice, how's the performance? The cheapest DigitalOcean x86 single-core CPU is a lot faster than a quad-core Pi or Tinkerboard. I know it's not the same as an ARM server CPU, but how much difference is there?


I've searched for Geekbench result: https://browser.geekbench.com/v6/cpu/1584694, it says DO-Premium-Intel 1 Processor, 1 Core, so I'm assuming it's the 7USD/mo plan from https://www.digitalocean.com/pricing/droplets#basic-droplets.

The score on Geekbench is Single-Core: 838, Multi-Core: 842.

While in our tests the cheapest ARM64 plan on Hetzner, the CAX11 (2 cores, 4 GB RAM, about 5 USD/mo), gets a Geekbench result of Single-Core: 1072, Multi-Core: 1921, so it's roughly 25-30% faster single-core than DigitalOcean (and more than twice as fast multi-core).

We've done the same test on Rpi4B too:

Processor: Cortex-A72

CPU cores: 4 @ 1500 MHz

Score: Single-Core 247, Multi-Core 387

For your reference.


Nice, thanks for the reply!


Weird to use an E3-1230 v3 in 2023; it's about 10 years old. A similar modern low-end CPU would be many times faster.


I guess they used whatever CPU gives a relative price parity with the VMs: https://www.hetzner.com/sb?country=de


Depends on the task/load.

Modern low end wouldn't be many times faster, at least not with a lower core/thread count

https://ark.intel.com/content/www/us/en/ark/products/75054/i...


I think a modern low end Xeon (desktop socket) is something like this. https://ark.intel.com/content/www/us/en/ark/products/212263/...

That's 6 cores, so not quite fair, but the W-1350 also has a much higher boost frequency, bigger caches, more bandwidth to RAM, and it's built on smaller lithography several generations of core designs later. It's hard to find a direct comparison between Rocket Lake and Haswell, but given all the differences between the low-end Xeon parts, you're probably seeing a significant increase in throughput if your load isn't bottlenecked on something else. Even then, 20 PCIe 4.0 lanes vs 16 PCIe 3.0 lanes is more than double the I/O capacity.


> you're probably seeing a significant increase

As someone who has had a chance to see the difference with his own eyes: yes, it's faster, especially when memory is the bottleneck. But it's not several times faster in everyday tasks. Outside of synthetic tests, you usually see more performance improvement from the overall system being faster (i.e. SATA to NVMe, faster RAM) than from the CPU alone.

There are some apps that are CPU bound (GHz first, RAM bandwidth second) which gladly run way faster on these E3-16xx CPUs than on contemporary multi-socket E5 monsters with tons of RAM... but waaay less GHz. These apps would do better on the W-1350, no question.


Their RX-220 servers are also amazing.

Ampere 80 core machines for $220/m.

We use these for anything requiring a lot of threads.


Great no nonsense article!

I'm surprised how badly the Xeon scales to 8 cores. But isn't the Xeon instance the only one not running bare metal? Maybe he is paying for 8 cores but only getting 2-4 physical cores?


That "Xeon" is a very old (10-year old) quadruple-core (8-thread, i.e. 8 "virtual CPUs") desktop Haswell CPU rebranded as "Xeon". A current Intel NUC Pro with a Core i3 CPU would be a much faster (67% faster ST, 43% faster MT) dedicated server than this one and it would cost to own less than $500 with DRAM and SSD, so about $8 per month for a 5-year lifetime (so the performance per $ would be at least 5 to 6 times higher than that of the compared Intel server).

That "Xeon" is a good comparison point only because it was available for them in the same price range, not because it would be representative for the performance of any modern x86 CPUs. Also the "Epyc" is probably a very old model.

Somebody who wants to spend their money for cloud services as efficiently as possible should better ensure that it is possible to migrate back and forth their applications between x86 and ARM instances, because which one is cheaper for a certain performance at a given time depends a lot on non-technical reasons, so it is unpredictable which will be cheaper a few months later.


@kramerger No, the Xeon server is a dedicated server (a.k.a. bare metal); I've looked at its console and found it's a Dell PowerEdge R220 (motherboard Dell Inc. 081N4V).

I'm quite confused about its performance as well.

CPU info: Intel Xeon E3-1230 v3 (1 Processor, 4 Cores, 8 Threads)

Geekbench Link is at: https://browser.geekbench.com/v6/cpu/1533259



Your link is wrong, it points to an E3-1230 (v1) from 2011 (Sandy Bridge), while the tested CPU was E3-1230 v3 from 2013 (Haswell).

Decoding Intel product names requires experience, because one or two letters or digits added or deleted can change the characteristics of the product dramatically. Two such products differing in one letter might have a five-times difference in performance.

Not that it matters much, because even an only 10-year old CPU is still ancient.


I've been using Hetzner's EX line for some years; it's super cost-effective, and I still can't find any other provider with cheaper offerings.


That's interesting. I feel like I see benchmarks almost always showing ARM outperforming for all kinds of specific workloads. This is the first one I can recall showing it's not as good performance-wise; however, when you add in the power efficiency and cost savings, it winds up being better overall.


In terms of software hiccups, for someone with little time to debug, is it worth the cost savings?


If you're not using proprietary software but common programming languages and OSS tooling there should be no difference.


At least two links don't work because they contain a closing parenthesis.


Thanks for pointing out! Now fixed.


CAX11 looks like a great deal, especially with IPv4 disabled.
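
If memory serves, the hcloud CLI lets you skip the IPv4 at creation time; the flags may vary by CLI version, and the server name/image are placeholders:

    # create a CAX11 with no public IPv4 (IPv6-only), skipping the primary IPv4 fee
    hcloud server create \
      --name arm-test \
      --type cax11 \
      --image ubuntu-22.04 \
      --without-ipv4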


How do you get your account verified at hetzner without sending a government ID to them?


Pay 20€ up-front through PayPal. This becomes available as credit, a top-up in a sense.


They have disabled my account; I can't even log in anymore.


Ah sucks. From what I hear their support will probably not help you register but it's worth a shot.


Get a new account.


I sent them a photo of my passport to get my account verified (the second time I registered with Hetzner).

(My first registration attempt got my account closed even though I provided my passport. Maybe it's because I used a VPN for registration, as their website is too slow to open in China (probably caused by the GFW).)



