This makes me tired. I know it's supposed to be humorous self-deprecation, but it's soul-crushing to see the pseudo real-time thought process behind the fantastically over-engineered setups from my day jobs. All for someone's humble blog?
Obligatory HN footnote: My blog costs $6 a month to serve HTML from digital ocean. Landing in the top five links a few times on HN didn't make the Linux load blip much past 0.20. GoAccess analyzes nginx traffic logs for free, if you want to know what countries are scraping your pages.
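If you want to try that, GoAccess is basically a one-liner against the default nginx access log (paths assume a stock nginx install; adjust --log-format if you've customized your logging):

    # Interactive terminal dashboard straight from the access log
    goaccess /var/log/nginx/access.log --log-format=COMBINED

    # Or spit out a self-contained HTML report you can open in a browser
    goaccess /var/log/nginx/access.log --log-format=COMBINED -o report.html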
A lot of places I've worked at never gave me the chance or opportunity to use all the fancy technologies we read about so often. Building your own blog was often the only outlet to explore them.
Cloudflare Pages is better overall because it's trivially easy to integrate with DNS for your custom domain/Cloudflare Workers, and it handles staged changes better IMO. You can point it at a GitHub repo, so unless you have a complex build it's easy to set up.
Unfortunately IME it's not a super well-polished product (I can't for the life of me get their CLI, wrangler, to log in on a headless machine, and their HTTP APIs aren't documented well enough to use for non-git file sources, so I can't get it to work in my not-so-special dev environment). So it's only better if you can get it to work, which is something you'll probably figure out in the first 5-10 minutes of using it.
But Cloudflare has a growing monopoly on internet traffic that is worse for the internet than the privacy-busting laws being passed. If you are a technologist worried about the distributed nature of the web, you should avoid it.
GitHub Pages is pretty bad for static content with its universal
Cache-Control: max-age=600
that can't be changed. Your assets should have a much longer expiry and ideally be immutable. Just get a server; it's cheap, you can do proper cache control, and you're not beholden to your Microsoft overlord.
With long expiry/immutable assets, only the HTML needs to be refetched from the server on refreshes or subsequent visits, instead of everything after merely ten minutes. On slow and/or high latency networks the difference can be huge. And you don’t even need to intentionally refresh — mobile browsers have been evicting background tabs since the dawn of time, and Chrome brought this behavior to desktop a while ago to save RAM (on by default).
By refetch I mean re-requested, which can return 304 responses. You still have to do a roundtrip for each resource in that case, and many websites (including static ones) have this waterfall of requests where html includes scripts and scripts include other scripts, especially now that some geniuses are pushing adoption of native esm imports instead of bundling. The roundtrips add up, and good luck if your link is unreliable in addition to being high latency. Compare that to proper caching where a refresh doesn’t request anything except maybe the html. I have experienced the web on such a link and it’s a shitshow.
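For the "just get a server" route, here's a minimal nginx sketch of what proper cache control can look like — assuming your build emits content-hashed filenames under /assets/, which is just an example layout (these go inside your server block):

    # Content-hashed assets (app.3f2a1c.js etc.) never change, so cache them "forever"
    location /assets/ {
        add_header Cache-Control "public, max-age=31536000, immutable";
    }

    # HTML gets revalidated on every visit, so new posts show up immediately
    # (no-cache means "store it, but check with the server before reusing it")
    location / {
        add_header Cache-Control "no-cache";
    }

With that, a refresh is one conditional request for the HTML (usually a 304) and everything else comes straight out of the local cache.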
Ok. I don't think this is a big deal for the vast majority of blogs, like mine, that are hosted on GH pages. It's just HTML and some photos that are unique per post. But I also don't see why GH would put the number so low.
Because they have this one max-age for everything, from things that should be refetched frequently (html, unversioned scripts and stylesheets, etc.) to things that should be immutable (versioned scripts and stylesheets, images, etc.). They don't understand your website. You do, and you can set exactly the right headers for the best user experience. Btw you can set the right headers with Netlify and Vercel as well.
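On Netlify that's a _headers file in your publish directory, roughly like this (the /assets/ prefix is just an assumed layout; Vercel does the equivalent through a headers entry in vercel.json):

    /assets/*
      Cache-Control: public, max-age=31536000, immutable

    /*
      Cache-Control: no-cache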
I'm not confident a "cheap server" (like a $5/mo DO droplet) would be able to withstand being on the front page of HN, but I am pretty confident a GH pages page could withstand being on the front page of HN.
I had a blog article of mine on the HN front page a few years ago and the nginx serving static pages from an ultra cheap VPS didn't even break a sweat.
Or Cloudflare Pages. As far as I can tell static content is served at no cost, and dynamic requests have very generous free limits (something like 100k requests/day).
i love gp. This one almost didn't fit on it because I wanted to serve a small db file to the client rather than pay for a remote one. Luckily I was able to keep it under their pretty generous file limit.
The main downside of GitHub pages is that they don't support running your own Jekyll plugins from _plugins; sometimes it's just a lot easier to write a bit of Ruby code. That said, you can just generate stuff locally and push the result, but that's the main reason I've been using Netlify.
You mean run Jekyll and deploy "manually"? That should work, yeah; didn't think of that actually. But the standard "GitHub Pages" deploy won't work with custom Ruby.
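The "build locally, push the result" flow is roughly this (a sketch; _site and the gh-pages branch are the usual Jekyll/Pages defaults, adjust to however your site is configured):

    # Build with your custom _plugins available locally
    bundle exec jekyll build

    # _site/ is normally gitignored, so force-add it and push it to the branch Pages serves
    git add -f _site
    git commit -m "build site"
    git subtree push --prefix _site origin gh-pages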
The difficulty with the "sold your soul" meme is that preserving your soul is a moving target. I've got some Oracle free tier instances. They get deployed with nixos-rebuild, same as anything else. The main difference between them and any other virtual server provider is when I've got to do something that requires logging in to the overwrought web interface, it's slightly less friendly than other providers (the IP config is a bit weird, too).
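For what it's worth, deploying to one of those instances is the same one-liner as any other box (a sketch; the flake attribute and address are placeholders for your own config):

    # Build locally, copy the closure over SSH, and activate it on the Oracle instance
    nixos-rebuild switch --flake .#blog --target-host root@<instance-ip>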
Using an offering from a specific company is not selling your soul. Selling your soul entails adopting something in a way that you become reliant upon it, giving whomever controls it leverage over you. The chief one these days is using Proprietary Software 2.0, and especially writing significant code that ends up inextricably wed to it. That can include the Oracle Cloud API, but it also includes every other lock-in-hopeful proprietary service API, including all of these "easy" and "free tier" offerings from not-yet-openly-associated-with-evil SaaS "startups".
So in short if you're choosing between some proprietary solution that offers "free" hosting (eg Heroku, Github pages, anything "serverless", etc) and Oracle free tier that gives you bog standard VMs on which you can run common libre software, choose the Oracle free tier route and don't think twice. If Oracle engages in "altering the deal", then the most you'll be on the hook for is $5/mo at a different provider rather than having to completely redo your setup.
Oracle cloud is suspiciously good. They also claim not to do the AWS thing: if you exceed the free limits, they'll just shut you down rather than bill you absurd amounts of money. I guess that's reserved for the Java and DB billing divisions.
Their free tier gives you quite a lot of disk. The catch is being capped at 10 Mbit, which can be mitigated by... Cloudflare!
Tangentially, is there a single provider that does a (Python) app platform (web, cron, workers) and a hosted Postgres plan for 10 USD a month? A VPS still seems like the most compelling option to me.
I did that for years but have recently switched to Cloudflare Pages. Costs are negligible either way, but Cloudflare auto-publishing straight from a GitHub webhook out of my repo means slightly fewer components.
I think humans are tinkerers. Given a choice between utilitarian productivity and tinkering, unless it's a life-or-death situation, people will go ham on the tinkering. Especially for such low-risk things as one's personal blog.
Now what is maybe a bit strange is companies like Vercel having massive valuations because of this. I asked in another comment somewhere: does anyone actually use them beyond the free or low-cost tiers?
Serving static files via nginx is easy on the compute. I'm serving something a tiny bit more complex (instructions at http://funky.nondeterministic.computer) and the $5 DO droplet couldn't keep up; I had to upgrade to a $12/mo server.
I’m very impressed that Vercel is able to sell so little for so much. They do the very bare bones hosting and charge a fortune to run everyone’s inefficient JavaScript framework of the month to replicate the speed and simplicity of a static site. Amazing.
They own React at this point, it seems. More and more hires I'm coming across know Next.js rather than React itself, and Vercel is now a massive part of the core React contributor team...
I had a personal project that was slightly more complex than something like a digital form and I wasn't even able to run it for free (I have zero users, why would I pay?)
At least the Heroku free tier could run all my apps. RIP
This has been my experience on Netlify, but not with Vercel. The biggest bottleneck is often the limit of 12 serverless functions per site (technically the limit depends on what framework you use, which is even more frustrating).
The function limit is particularly frustrating when you need route splitting to avoid slow cold starts or memory limits. I even hit this in a few Astro projects, which was particularly surprising: when serverless rendering was an all-or-nothing option for Astro, Vercel was effectively useless on Hobby plans.
The limit of 12 functions is only if you are deploying an API-only project without bundling[1]. The majority of the modern frameworks support bundling, so you can write many, many more APIs (100s+) which compile down to a handful of functions.
This bundling also means fewer cold starts. Bundling is the default for Astro[2]. Also worth noting, on paid plans, functions are kept warm automatically[3].
Thanks Lee. That makes total sense when using SvelteKit or Next.js on Vercel: when Vercel owns the build step, bundling, and infrastructure, you really have a great chance to optimize everything.
It's a bit of a crapshoot with third-party frameworks though. With Astro, unless I'm misremembering the timing, they defaulted to bundling per route originally and only changed that when Vercel users ran into issues with the Hobby plan. More interestingly on the timing, I think that was right around the time Vercel took over as Astro's official hosting sponsor. Not sure how much of a part that played in the change in defaults.
In general, I'm always hesitant with a build system that I depend on to route split in a way that impacts my actual cost to run. At the end of the day I have little say in how routes are split and little insight into what metrics are used at bundle time to make those decisions. That said, I haven't heard any horror stories with SvelteKit or NextJS on Vercel so the concern may very well be unfounded as long as I stay in the Vercel ecosystem.
1: Vercel is running millions of personal Next.js static sites for free.
2: Inefficient in what sense? In my experience, most of the latest software startups are shipping incredibly quickly with the Next.js / Vercel stack. TS/JS is still a much faster runtime (and the only one with types) than the practical alternatives of Python, Ruby, and PHP. Only a single-digit percentage of new startups ship in Java/C#. Go could make a decent case.
3: IMO the Next.js / Vercel deployment experience is far far better than what I dealt with wrangling Django templates / non-template integration / deploying anywhere else.
> I live on the edge, the edge of the network, the browser, the bleeding edge. Everything must be serverless, multi-region, edge delivered, eventually consistent, strongly typed, ACID compliant, point in time recovery, buzzword buzzword, and buzzword bazzword.
I also did the same, built my own analytics with TinyBird for one of my projects (https://linkycal.com). It ended up costing less than paying for a hosting provider.
> I am open to ideas on why this happens but my guess is because bun isn't written in rust.
LOL classic. I love Rust and I enjoy when people take the piss out of us fans.
I do use SQLite every now and then but I'm always surprised by how low-latency and high-throughput it is. I have bad intuition for how efficient it is. Good stuff!
I quite liked the blog. Minus all the bleeding-edge stuff, I built an analytics website for myself a few months ago and it was quite fun. Later I extended it to include some real-time insights on the performance of my sites.
Pretty sure it's "Squeeh" and "Gypity" giving it those vibes (I know he's the one that led to me calling it "Gypity", and he always pronounces SQL "Squeal"). Solid bet that the author is a consumer of Primeagen content.