<obligatory-rant> I'll never understand why serving static(!) sites is so hard. Are modern blog systems still that bad? HN traffic is far below 100 req/sec (perhaps below 10 req/sec), which should be an absolute no-brainer for any modern webserver. [1] Heck, given a good internet connection, one should be able to run 10 such blogs on a Raspberry Pi and still survive HN. </obligatory-rant>
> I'll never understand why serving static(!) sites is so hard.
It's not hard. They're all adding stupid bloat for no good reason.
When my blog was on the frontpage, traffic peaked at 1 Mbit/s (that's megabits, not megabytes) and CPU load peaked at 5% of a single core (and only because that box runs a dozen services in parallel).
Everyone whose blog cannot withstand the HN crowd deserves to have their computer operator's license revoked.
A couple of years ago I had a few articles on a static blog hit HN.
Out-of-the-box nginx on Ubuntu 14.04 on a 1GB Linode (then the second tier) handled it perfectly fine, with no disruption to a TeamSpeak server that was on the same host at the time.
According to support, my site just got DDoS-ed. Makes me wonder who can get THAT pissed off with an article about Debian. Anyway, it makes me feel like I'm doing something right. :D
(BTW you're right, it's a cheap host and quite slow even when it's working. The site runs on Grav (a flat-file CMS), which is a lot faster than e.g. WP and its ilk on the same cheap host.)
Only if you have a well-configured cache in front of your CMS. If you don't expect any significant traffic, it's perfectly reasonable to serve directly from your CMS. That might be super slow, because optimizing a CMS for performance does not make sense; that should be the job of a cache sitting in front of it.
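To make the "cache in front" point concrete, here's a minimal sketch of a time-based micro-cache wrapping a hypothetical CMS render function. The function name, TTL, and fake render delay are all made up; a real setup would more likely use nginx's proxy_cache or Varnish than application code, but the idea is the same: the CMS renders a page once, and the cache answers everyone else.

```python
# Minimal sketch of a time-based micro-cache in front of a slow CMS render.
# `render_with_cms` is a hypothetical stand-in for whatever your CMS does;
# the cache simply reuses its output for a few seconds.
import time

CACHE_TTL = 30  # seconds; even a tiny TTL absorbs a front-page traffic spike
_cache = {}     # url -> (expiry_timestamp, rendered_html)

def render_with_cms(url: str) -> str:
    """Pretend this is the expensive CMS render (DB queries, templating)."""
    time.sleep(0.5)
    return f"<html><body>Rendered page for {url}</body></html>"

def cached_render(url: str) -> str:
    expiry, html = _cache.get(url, (0.0, ""))
    if time.time() < expiry:
        return html                      # cache hit: no CMS work at all
    html = render_with_cms(url)          # cache miss: render once...
    _cache[url] = (time.time() + CACHE_TTL, html)
    return html                          # ...and serve it cheaply until the TTL expires
```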
On the other hand, why bother writing a blog if you don't expect any readers?
There's this hidden assumption here that "the HN crowd" = "a huge amount of visitors". The typical HN crowd for a front-page article will be on the order of 10,000 visitors. That's really not much on the scale of the internet.
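Back-of-envelope, assuming those 10,000 visitors arrive over roughly a six-hour front-page run and each pulls about ten assets (both numbers are guesses, not measurements):

```python
# Rough estimate of the average request rate from an HN front-page visit.
visitors = 10_000          # assumed total visitors
hours_on_front_page = 6    # assumed duration of the traffic spike
assets_per_visit = 10      # assumed HTML + CSS + images per page view

requests = visitors * assets_per_visit
req_per_sec = requests / (hours_on_front_page * 3600)
print(f"{req_per_sec:.1f} req/sec on average")  # ~4.6 req/sec
```

Even with a generous peak-to-average factor, that stays far below what any static file server can handle.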
It might be true for self-written blog software, but amusingly those tend to have quite good performance ... either because they are static generators, or because pages are rendered on the fly with very simple and hence fast templates.
However, there is no such excuse for popular CMSes. Those have been developed for 10+ years and have a wide range of users, from small to large blogs.
Finally, I wouldn't call static HTML pages "premature optimization", but rather "the natural thing to do". Let's have a look at the data access pattern: on average, articles are written once, seldom updated, and read at least 10x as often as written. With increasing popularity, this ratio shifts even further in the "read" direction. [1] Since the datasets are small (on the order of KiB or MiB), complete regeneration is feasible (a minimal sketch follows the footnote). Moreover, it is much simpler and less error-prone than caching. And you can speed up site generation with classic build tools (make, tup, etc.), if you want.
[1] That is, with increased popularity more articles are written due to the increased motivation, but disproportionately more readers will arrive.
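For illustration, a minimal sketch of that "complete regeneration" approach with a make-style mtime check. The directory layout and the one-line template are assumptions for the example, not how any particular generator works:

```python
# Rebuild posts/*.txt into site/*.html, skipping files whose output is
# already newer than the input (the same rule a Makefile would express).
from pathlib import Path

SRC = Path("posts")   # one plain-text file per article (assumed layout)
OUT = Path("site")

TEMPLATE = "<html><head><title>{title}</title></head><body><pre>{body}</pre></body></html>"

def build() -> None:
    OUT.mkdir(exist_ok=True)
    for src in SRC.glob("*.txt"):
        dst = OUT / (src.stem + ".html")
        # make-style rule: only regenerate if the source is newer than the output
        if dst.exists() and dst.stat().st_mtime >= src.stat().st_mtime:
            continue
        text = src.read_text(encoding="utf-8")
        title, _, body = text.partition("\n")  # first line is the title (assumption)
        dst.write_text(TEMPLATE.format(title=title, body=body), encoding="utf-8")

if __name__ == "__main__":
    build()
```

Regenerating a few hundred such pages takes a fraction of a second, which is why the "writes are rare, reads dominate" access pattern makes full regeneration so attractive.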
Google Cache: https://webcache.googleusercontent.com/search?q=cache:nHJYRo...
[1] According to ServerFault, challenges start at 100000 req/sec: https://serverfault.com/q/408546/175421