Hacker News
How web bloat impacts users with slow devices (danluu.com)
932 points by jasondavies 10 months ago | 597 comments



As someone with recent experience using a relatively slow Android phone, it can be absolutely brutal to load some web pages, even ones that only appear to be serving text and images (and a load of trackers/ads presumably). The network is never the bottleneck here.

This problem is compounded by several factors. One is that older/slower phones cannot always use fully-featured browsers such as Firefox for mobile. The app takes too many resources on its own before even opening a website. That means turning to a pared-down browser like Firefox Focus, which is ok except for not being able to have extensions. That means no ublock origin, which of course makes the web an even worse experience.

Another issue is that some sites will complain if you are not using a "standard" browser and the site will become unusable for that reason alone.

In these situations, companies frequently try to force an app down your throat instead. And who knows how much space that will take up on a space-limited device or how poorly it will run.

Many companies/sites used to have simplified versions to account for slower devices/connections, but in my experience these are being phased out and getting harder to find. I imagine it's much harder to serve ads and operate a full tracking network to/from every social media company without all the javascript bloat.


> That means no ublock origin

Talk about a catch-22 situation. The modern web is useless without adblocking. Especially when you get forever scrolling pages with random ads stuffed in there.


I use ublock origin, and on literally more than one occasion (insert doofenshmirtz nickel quote) I've found a site that I quite like, think it's awesome to the point I actually write to the people who create it with suggestions, and then for whatever reason happen to load it without blockers and discover it's halfway useless with all the ads on it.

I fully support people being able to make some money off the useful things they build on the internet, whether it's some random person who built a thing, or the New York Times or even FB or Google, but there has to be a better local maximum than online advertising.


    > (insert doofenshmirtz nickel quote)
I had to Google for it: IMDB.com has the quote:

    > Dr. Heinz Doofenshmirtz : Wow, if I had a nickel for every time I was doomed by a puppet, I'd have two nickels - which isn't a lot, but it's weird that it happened twice.
<hat tip>


I thought about including the whole thing, but then I decided that doofenshmirtz being as google-able as it is, anyone who was curious could easily find it.


Lowering the barrier to entry so as to not overtax curiosity seems sensible in this crowd.


I just choose not to use it. If I follow a link and there is an ad per paragraph and a video starts playing, I close the tab. It's rare that the page I was about to look at was actually important.


As someone illiterate in web development, I wonder how hard it would be to write a browser extension that loads a page, does the infinite scrolling in memory and in the background, and then, while the infinite stuff is still loading, splits the content into pages and shows those instead, so that the user can go back and forth between page numbers. This wouldn't reduce the network and system load, but navigating the results would be much more friendly.


Problem is, "infinite scroll" often is infinite, meaning it will load an ass load of data in the background and take up a ton of memory, and the user may never even end up looking at that data.

I really hate the load on scroll (especially Google Drive's implementation which is absolute trash, and half the time I'll scroll too fast and it will just miss a bunch of files and I'll have to refresh the page and try again), but a better hack might be an extension that scrolls a page or two ahead for you and stores that in memory. If it was smart enough to infinitely scroll websites that are actually finite (like google drive) that would be amazing though.


In these situations what’s eating up your resources usually isn’t the data being represented but instead the representation.

This is why native apps use recycler views for not just infinite scroll, but anything that can display more rows/columns/items/etc than can fit on screen at once. Recycler views only create just enough cells to fill the screen even if you have tens of thousands of items to represent, and when you scroll they reuse these cells to display the currently relevant segment of data. When used correctly by developers, these are very lightweight and allow 60FPS scrolling of very large lists even on very weak devices.

These are possible to implement in JavaScript in browsers, but implementation quality varies a lot and many web devs just never bother. This is why I think HTML should gain a native recycler widget of its own, because the engineers working on Blink, Gecko, and WebKit are in much better positions to write high quality optimized implementations, plus even if web devs don’t use it directly, many frameworks will.
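To illustrate, a minimal sketch of the technique in vanilla JS might look like this (the element ids, the fixed row height, and the surrounding markup/CSS are assumptions for illustration, not a production implementation):

    // A minimal sketch of list virtualization in vanilla JS. Assumed (illustrative)
    // markup: a fixed-height, scrollable #viewport containing a #spacer that sets
    // the total scroll height and an absolutely positioned #pool holding the rows.
    const items = Array.from({ length: 50000 }, (_, i) => `Item ${i}`);
    const ROW_HEIGHT = 30; // px, assumed fixed for simplicity

    const viewport = document.getElementById('viewport');
    const spacer = document.getElementById('spacer');
    const pool = document.getElementById('pool');

    // Reserve the full scroll height without creating 50,000 DOM nodes.
    spacer.style.height = `${items.length * ROW_HEIGHT}px`;

    function render() {
      const first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
      const visible = Math.ceil(viewport.clientHeight / ROW_HEIGHT) + 1;
      // Move the small pool of rows to where the viewport currently is.
      pool.style.transform = `translateY(${first * ROW_HEIGHT}px)`;
      for (let i = 0; i < visible; i++) {
        let row = pool.children[i];
        if (!row) { // a row is created only the first time it's needed
          row = document.createElement('div');
          row.style.height = `${ROW_HEIGHT}px`;
          pool.appendChild(row);
        }
        // Reuse the existing element; only its content changes while scrolling.
        row.textContent = items[first + i] ?? '';
      }
    }

    viewport.addEventListener('scroll', render);
    render();

Only a screenful of row elements ever exists, no matter how long the list is; scrolling just repositions and refills them.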


There was a proposal for a browser-native virtual scroller: https://wicg.github.io/virtual-scroller/

Apparently it was abandoned (for now?) in favor of content-visibility / CSS containment primitives: https://web.dev/articles/content-visibility
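For anyone curious, the CSS route looks roughly like this (a minimal sketch; the .feed-item selector and the 300px estimate are just illustrative):

    /* Off-screen items are skipped for rendering until they approach the viewport. */
    .feed-item {
      content-visibility: auto;
      /* Reserve an estimated height so the scrollbar doesn't jump as items render in. */
      contain-intrinsic-size: auto 300px;
    }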


I find this idea interesting ‘These are possible in JavaScript in browsers, but implementation quality varies a lot and many web devs just never bother.’

Do you have any examples that you consider good implementations? I ask because tables seem to be the biggest offenders among slow components in, say, Angular / PrimeNG. I am moving onto a legacy app soon that is being updated (Angular but not PrimeNG). I would like to see if we can build a feature-rich table that is more performant than the PrimeNG one, which I know looks amazing but is the cause of many headaches.

NOTE: it's not Angular or PrimeNG specifically that makes the tables slow/hogs, but the amount of DOM elements inside and some of the implementation details that I disagree with (functions that are called within the HTML being evaluated every tick). Would be great to see if this idea of a ‘recycler widget’ can help us. Cheers.


We do this at Fastmail and, if I say so myself, our implementation is pretty damn good. We’ve had this for over a decade, so it was originally built for much lower powered devices.


> its not Angular or PrimeNG specifically that make the tables slow/hogs, but the amount of DOM elements inside

Yep, this happens even with nothing but a 10-line vanilla JS file that adds N more items every time the user scrolls to the bottom of the page. Performance degradation increases with every load due to the growing number of DOM elements which eventually exceeds whatever margin is afforded by the machine the browser is running on, causing chug.

Web is not my specialty so I don’t have specific recommendations, but plenty of results turn up when searching for e.g. “angular recycler” or “react recycler”.


These suck donkey balls because they break searching for keywords on the page.


Which is another reason why an HTML-native recycler is desirable. The browser could easily avoid this issue if it were the one to implement the recycler, rather than it being a third-party bolt-on.


> Problem is, "infinite scroll" often is infinite, meaning it will load an ass load of data in the background and take up a ton of memory, and the user may never even end up looking at that data.

It's also an infinitely worse user experience and prevents you from holding your place in whatever is being scrolled. Are there advantages? Why is infinite scroll used in any context?


Personally I prefer infinite scroll, versus the alternative of finding the "next page" button at the bottom, waiting for the content to load (preloading could help here), and sometimes navigating back to the actual beginning of the content I was viewing. I even used a browser extension that matched "next" buttons on pages and loaded the next page's content automatically, but the extension (can't recall its name) is not available anymore.

Granted there are some downsides, such as having the browser keep extra-long pages in its memory, but overall I prefer working infinite scroll mechanisms over paged ones. As far as I see, the ability to remember the current location in the page could be easily implemented by modifying page anchor and parameters accordingly, though personally I've rarely needed it.

Perhaps if there were a standard way (so, in the HTML spec) to implement infinite scrolling, it would work correctly in all cases and possibly even allow the user to select a paged variant according to their preference.

Not all the paged views work correctly either. In particular systems that show threaded discussions can behave strangely when you select the next page. Worst offender is Slashdot.


Infinite scrolling, at least in most implementations, makes it almost impossible to find historic content. This could be useful e.g. to journalists and researchers, but even as a private individual I would sometimes love to be able to see what person XY posted in, say, 2019.

In other words, infinite scrolling works in "pure consumption" contexts. You'd never want to add infinite scrolling to some backoffice admin interface.


1 batch of content = 1 batch of ad space = more money.

Each next page click is a moment for you to reflect and notice the waste of time. Simple as that.


You don’t actually need to load everything, just the previous, current, and next pages.


It'll give a nicer experience and will eliminate the situation where an element changes location just as you try to tap on it.

The extension just needs to handle GDPR notice and Email subscription overlays.


Even if you do use a standard browser, companies will force you to use an app by making their website broken (on purpose?).

Random recent example: Nike. Popping useless errors upon checkout in the webshop. Support: "oh, we're so sorry, just try the app, k bye".

Another example of major companies with broken websites more often than not: (European) airline booking websites.

And major companies think this is totally fine and doesn't damage their brand? I mean not being able to create a functioning website with unlimited funds in 2024 is not a bad look?!


I can show some forgiveness to airlines, because they simply outsource it to some agency somewhere.

But I have zero sympathy for giants like Slack. If I do a "Request the Desktop Site", then it suddenly works (albeit with a lot of scrolling) on my Firefox (iOS), but if I disable "Request the Desktop Site", then it blocks everything and forces me to download the app from the App Store.

Sadly, the downloaded app looks like an optimized mobile version of the site.


That seems backwards to me. Airlines are far more important than Slack, which is why they are regulated and Slack is not. They deserve no forgiveness for denying you service if you use an ad blocker, making you jump through ridiculous hoops to get a good recaptcha score, rejecting transactions so you have to re-buy at a higher price, or any other kind of easily fixable accessibility offence.


Reddit is another example where they've broken the mobile browser experience, to send you to another app. Arguably broken, but in different ways.


LinkedIn, leaders in deceptive design (though given the recent HN thread on their internal situation, a more favorable interpretation may be that they can't handle their own bloat and it shows).


> LinkedIn, leaders in deceptive design

Oh, absolutely. One example: You want to connect with someone, and it'll throw up a dialog box on whether you want to add a note, with two buttons "Add Note", "Send". If you now decide that you don't want to connect after all, or want to look up something before composing the note, and close the dialog instead - it still sends the connection request.


[flagged]


But then you get the desktop version of the site. Never mind that Reddit has a mobile-friendly version (whose design Reddit has kept on bungling too).


The desktop version is still much more usable on mobile than the "mobile-friendly" version.


The i.reddit mobile site sadly seems to have stopped working. At least for me.


Try adding .i to the end of the URL.


My response, "why should I trust you with greater access to my phone when you can't make your website work? How do I know you aren't bundling malware? Considering you are obviously using unqualified developers."


> In these situations, companies frequently try to force an app down your throat instead. And who knows how much space that will take up on a space-limited device or how poorly it will run.

And honestly, that app is going to be a browser shell with a (partially) offline copy of the website in it, 9 times out of 10...


> that app is going to be a browser shell with a (partially) offline copy of the website in it, 9 times out of 10

If you're lucky. The main UI may just be a website, but as a native app it has a greater ability to spam you, track you, accidentally introduce security vulnerabilities, etc.


I wrote code for the main nokia.com site 10 years ago that used a few ways to detect slow loading of resources and set a flag to disable extra features on the site. This was done because the site had to work in every country and many of the slowest phones sold were from said company.


I also worked for Nokia 13 or so years ago, though not on Nokia.com

Thanks for your work, one of the things that I really liked about Nokia was the passion for performance.

On the flip side: I was on the Meego project and we joked that we had the most expensive clock application ever created, because it kept being completely recreated.


I liked Meego and Maemo, I always felt that they were an expression of the idea that general purpose computing can work in the mobile form factor, which is something that tremendously appeals to me (I wish I still had my N900).


I've got an old MacBook Pro from 2013 that I still keep around because the keyboard is the best Apple ever made. It's not fast by any means, but I haven't encountered any difficulty with websites whatsoever. They're not as snappy as I'd expect on new hardware, but perfectly usable. I do use uBlock Origin, however.

Are these Androids actually less powerful than an 11 year-old, base-spec MacBook?


> Are these Androids actually less powerful than an 11 year-old, base-spec MacBook?

Yes. Definitely. A MacBook Pro from 2013 has between 4 and 16GB of memory, for one thing. The lowest-spec phone in the article (Itel P32) has 1GB. A 2013 MacBook Pro has a 4th-gen i5 processor. This phone has a MediaTek MT6580. It's not even in the same ballpark.

This is a bit of an extreme example, but the fact is that a very large number of people in many areas of the world use phones like these.


Additionally, weak Android devices are not necessarily old Android devices. New underpowered Android stuff is sold every day. Cheap tablets are particularly bad about this — I have a Lenovo tablet that I bought maybe a year ago which uses a SoC that benches a bit above a 2015 Apple A9.


$50 android phones are still sold in developing countries and they usually have an MT6580 or UMS312 with 720p screen.


I was forced to use an old 2011 (IIRC) MacBook Air in 2020 (my replacement BTO MacBook Pro was stuck in China due to the pandemic), and the OS and local apps were fine and reasonably snappy, but oh boy, the web sucked so bad. Slow slow slow. Couldn't open more than 3 or 4 tabs of "modern" websites (HN was fine, yay). Virtually unusable. I was shocked.


"Another issue is that some sites will complain if you are not using a "standard" browser and the site will become unusable for that reason alone."

This is where the blog post falls short. It assumes the software someone uses to access these websites is a constant, chosen from a small selection of browsers controlled by companies that seek to profit from proliferation of online advertising. The software used is not a variable in this "study".

Most of the sites on the list presented, namely the ones where we are just reading, listening or watching (cf. performing some commercial transaction), can be accessed using software not controlled by so-called "tech" companies on older hardware even with today's bloat and the experience will be faster than if trying to use the bloated software from so-called "tech" companies. (That software comprises both the browser and the gratuitous Javascript software that these browsers will run automatically without any input or choice from the user.)


Can you run all your traffic through a self-hosted pihole to avoid such things?


You're already too rich and too tech aware to qualify as the low end described in the article if you ask that question :)


Maybe so but I also test things in a variety of browsers and devices frequently to try and avoid the problems described in the article.


That's great.

But you just assumed that someone who uses these low end devices has heard of a raspberry pi, can afford one and even has wired internet to plug it into.


I was responding to someone with a recent example of a slower Android phone. I know many Linux and Android users who already have Raspberry Pi’s and similar devices they could use to set up a pihole. People who go down the “fully free” Linux path or even are using more experimental hardware have the same issues.

Pihole can (I assume) also work on WiFi too.


Certainly an option for me. But not a scalable solution for the large number of non-tech people with older devices.


Wasn’t there an old browser that would render the page on the server and just send down the result or something like that?


The old Opera for mobile did that. I think Chrome had something similar at one point.


Opera!


I’d love something like it for all my older devices where I can set it and forget it.


NextDNS is pretty good for this - just change the DNS in your network settings.


You'll also need to add bundles to block DNS names (free, FYI).


Having a decent internet experience shouldn't require going through your own self-hosted server.


Absolutely not but then I never thought I’d need a 20,000 entry hosts file either.


Dan's point about being aware of the different levels of inequality in the world is something I strongly agree with, but that should also include the middle-income countries, especially in Latin America and Southeast Asia. For example, a user with a data plan with a monthly limit in the single-digit GBs, and a RAM/CPU profile resembling a decade-old US flagship. That's good enough to use Discourse at all, but the experience will probably be on the unpleasantly slow side. I believe it's primarily this category of user that accounts for Dan's observation that incremental improvements in CPU/RAM/disk measurably improve engagement.

As for users with the lowest-end devices like the Itel P32, Dan's chart seems to prove that no amount of incremental optimization would benefit them. The only thing that might is a wholesale different client architecture that sacrifices features and polish to provide the slimmest code possible. That is, an alternate "lite/basic" mode. Unfortunately, this style of approach has rarely proved successful: the empathy problem returns in a different guise, as US-based developers often make the wrong decisions on which features/polish are essential to keep versus which can be discarded for performance reasons.


> That's good enough to use Discourse at all, but the experience will probably be on the unpleasantly slow side. ... an alternate "lite/basic" mode

Why does this need to be the "alternate" choice though? What does current Discourse provide that e.g. PhpBB or the DLang forum do not? (Other than mobile friendly design, which in a sane world shouldn't involve more than a few tweaks to a "responsive" CSS stylesheet).


I like the scroll view in discourse. Makes it super easy to follow a thread. The subthreads and replies are also easier to use. The search is better, the ability to upvote makes it better for some use cases, and in general phpbb is a mess in terms of actually being able to see what's useful and what threads are relevant.

I think flipping the question makes more sense, why do you think some forums switched to or started using discourse instead of just using phpbb? I can guarantee you that it's not just to follow a fad or whatever, most niche or support forums don't care about that.


I do think trendiness and modern-feeling UIs are requirements for most forums these days, from most perspectives.

I say this as someone who frequently uses and enjoys both the brutalist design of a text web browser and the Emacs Mastodon client.


Discourse still offers a worse experience than phpBB though, even if you use a fast device.


I was thinking about this when I saw this post earlier today.

Why shouldn't the default be: does this website work in Lynx? I think that's a damn good baseline.

And in response to the other parent post, on a (almost) new iPhone, both news sites & Twitter continuously crash and reload for me. I'm not sure what the state of these other popular sites are because I don't use them.


Voice, video, realtime interaction, a devoted user base, an incredible amount of money…


What do you mean by voice and video? Why would I want to have voice in a forum? I think that would be akin to receiving voice messages in messengers. Or do you mean, that for these kinds of things a widget can be displayed? That certainly is possible in old style forums. It is just HTML, an embed code away.


Discourse, not Discord.


whoopsie. thanks.


> For example, a user with a data plan with a monthly limit in the single-digit GBs

I live in a poor Southeast Asian country.

People with small data plans don't use data from efficient websites, they use wifi which is omnipresent.

30GB of data on a monthly plan is $3.64. Which is about 4-6 hours of minimum wage (minimum wage is lower in agricultural areas).

But more to the point, people don't use data profligately like in the West. Every single cafe, restaurant, supermarket, and mall has free wifi. Most people ask for the wifi password before they ask for the menu.

I've never seen or heard anyone talk about a website using up their data too fast.

It honestly sounds like a made up concern from people who've never actually lived in a developing country.

People here run out of data from watching videos on TikTok, Instagram, and Facebook. Not from website bloat.


> Every single cafe, restaurant, supermarket, and mall has free wifi.

I live in a major city in the Philippines, and free WiFi is becoming more of a rarity nowadays. Not even Starbucks and other big chain restaurants, malls, and cafes offer WiFi anymore because of how widely available data is. They expect you to bring your own data and tether if you want to browse or do some work.

In more rural areas, WiFi is definitely not widely available. On the rare chance it’s even offered, it’s usually “piso WiFi” paid by the minute.


I live in one. You're talking nonsense.

That 30GB plan is more than some people with phones earn here.


Thank you for the first hand experience anecdote!

I think one way for first world country citizens to empathise with this is how people behave when on roaming data plans during overseas trips. One does keep to public WiFi as much as possible and keep mobile data usage to a minimum or for emergency purposes.


> I've never seen or heard anyone talk about a website using up their data too fast.

Is this because the website usage doesn't add up or because they don't have the tools to track which sites are using how much data?


I mean, not using a data plan here in Northern Europe was me 11 years ago… and using it sparingly because video or songs would blow through the data plan instantly was me eight years ago.


Yeah, I still remember when I started massively using the cellular web: at the time it cost 1024€/GB, which I found to be cheap!


eh, idk. This is your anecdotal experience, there are others (like me) who have different ones

>It honestly sounds like a made up concern from people who've never actually lived in a developing country.

I once loaded a site that loaded an approx 324MB "super resolution" image (I knew it was high res, but I thought it was like 30-40MB at best). That took care of 1/3rd of my monthly data in a single page load.


A useful feature of uBlock Origin is being able to block all media elements larger than a given size, such as 50KB. I wish I could set it to only do this on mobile networks and let wifi stay unlimited.


"It honestly sounds like a made up concern from people who've never actually lived in a developing country."

You mean, the one developing country you live in.

You are also missing the full spectrum of users. People don't just browse the web for fun. They look for important information like health or finance information, they might not want to do that in a public place or they might not be able to put it off for when they next have wifi.

If you are building an e-commerce website it might not matter, but you could be building a news site, or any number of other things.


If all the sites got more efficient, it may also increase the longevity of laptops and PCs, where unsavvy people might just think they "need a new computer, it is getting slow".

Also applies to bloatware shipped with computers. To the point where I was offered a $50 “tune up” to a new laptop I purchased recently. Imagine a new car dealer offered you that!


I worked at a now-defunct electronics store (not Fry's in this instance) in the early 2000s that offered this "tune-up" - it was to remove the stuff that HP and Dell got paid to pre-install, and to fully update Windows and whatever else.

Remove the McAfee nuisance popups and any browser "addons" that were badged/branded. And IIRC we charged more than $50 for that service back then.


For the performance boost it could offer the unsavvy user stuck on an HDD, it was probably worth it to many. Gross to be the middleman, but it is what it is.


Another computer shop I worked in charged $90 for virus removal, but we also eventually made it policy to just reformat/reimage the drive and remove all the crap and fully update the OS. Prior to that the policy was "remove viruses, remove crapware, update OS", but we had a few customers whose machines had 30,000 viruses. I forget what the record was, but it was way up there in count. Trying to clean those machines had a marginal failure rate, enough that it was costing the owner money to have us repeatedly clean them without payment.

No one wants to tell a customer that they need to find better adult content sites, and that we won't be cleaning their machines without payment anymore!


"just reformat/reimage the drive and remove all the crap"

And that is not more work?

It was usually the way I did it, too. But this requires checking with the owner which apps are important, saved preferences, where the important files are stored (they never know), etc.


What’s the financial incentive in that? Manufacturers ideally want you to buy a whole new device every year, they don’t want you repairing or extending the life.


Some of these sites are un-fucking-bearable on my gen old iPhone.

And if I’m in a place with a shitty signal, forget about it, this problem is 10 times worse.

I’m not even talking about the cluttered UI where only a third of the page is visible because of frozen headers and ads, I’m talking about the size of the websites themselves that are built by people who throw shit against the wall until it looks like whatever design document they were given. A website that would have already been bloated had it been built correctly that then becomes unusable on a slow internet connection, forget slow hardware.

All that is to say, I can’t imagine what it must be like to use the internet under the circumstances in which you described.

I can only hope these people use localized sites built for their bandwidth and devices and don’t have to interact with the bloated crap we deal with.


I really wish all software developers had to have 10 year old phones and computers and a slow 3G connection as their daily drivers. It might at the very least give them some empathy about how hard it is to use their software on an underspec machine.


> For example, a user with a data plan with a monthly limit in the single-digit GBs, and a RAM/CPU profile resembling a decade-old US flagship

I’m in Canada and have a single digit plan and I just upgraded from an almost decade old flagship. Most websites are torture.


I'm in Canada and have a triple-digit plan, in MBs. It's for emergency use only. It would be nice if something as simple as checking on power outages didn't chew up a good portion of the data plan.


I had a 200MB plan for $35/month until early 2022. It was an old Koodo plan.

I never used it. I don't do a lot. WiFi at home, drive to work, WiFi at work, drive to home.

Travelling with the kids I've found the new plan makes life easier.


Yeah, different people need different things out of their phones. Yet the point remains that stingy data plans still exist in developed countries. Even though people may have better devices than those mentioned in the article (it is easier to justify a one-time expense than a recurring one), there are people who are stuck with them for various reasons. Affordability is definitely one of the reasons.

Even so, we should avoid pigeonholing those who have limited access to data as poor people. There are other reasons.


In the mid '00s, I had ADSL with IIRC ≈300 MB included in the monthly payment, with an extremely predatory rate over the limit. I used to stretch it for 3 weeks out of a month by browsing with images disabled (and the bulk of my bandwidth spent on Warcraft 3).

That would last for a few hours of lightweight (not youtube/images/etc) browsing now.


> an alternate "lite/basic" mode.

In another world this mode dominated UI/UX design and development and the result was beautiful and efficient. Where design more resembles a haiku than an unedited novel.

We don't get to live in that world, but it's not hard to imagine.


I think it is sort of hard to imagine; a world populated mostly by humans that appreciate that sort of simplicity is pretty different!

If we had modern computers in 200X, we wouldn’t just have music on our myspaces, we’d put whole games there I bet.


People did, in fact, embed games on MySpace, mostly using Flash if I recall correctly.


It's not even just the middle-income countries—I have an iPhone 13, so only three years old, on a US wifi connection with high speed broadband, and it can't handle the glitzy bloat of the prospectus for one of my ETFs. I don't understand why a prospectus shouldn't just be a PDF anyway, but it baffles me that someone would put so much bloated design into a prospectus that a recent phone can't handle it.


It shouldn't be a PDF because PDFs don't reflow text, which is especially important on phones.


Make 2 pdfs.


there are more than 2 screen widths


> The only thing that might is a wholesale different client architecture that sacrifices features and polish to provide the slimmest code possible. That is, an alternate "lite/basic" mode. Unfortunately, this style of approach has rarely proved successful

But it is gaining popularity with the unexpected rise of htmx and its 'simpler is better even if it's slightly worse' philosophy.


Isn't that 'worse is better' philosophy?


I think it's rather a "performance is more important than functionality" philosophy.


In the case of the devices we're talking about, performance is effectively functionality.


My point exactly. By making your website fast and light, you make it easier and more pleasant to use. HTMX has a limited set of actions that it supports, so it can't do everything that people typically want. It can do more than enough though. (remember websites that actually used the `<form>` element?)
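For what it's worth, a minimal htmx sketch of that old form pattern might look like this (the /search endpoint returning an HTML fragment is an assumption for illustration, and the htmx script is assumed to be included on the page):

    <!-- The server renders plain HTML and htmx swaps the returned fragment
         into #results; there is no client-side rendering layer. -->
    <form hx-get="/search" hx-target="#results" hx-swap="innerHTML">
      <input type="text" name="q" placeholder="Search...">
      <button type="submit">Search</button>
    </form>
    <div id="results"></div>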


The most performant site is a blank page.


Astute observation.

It should be easy to use this as a "north star" and your only job is to not screw it up hardly at all.

Some people are just worse screw-ups than others.


Good news for bloated JS SPAs, since that is often what they look like :)


These phones aren't just in the Developing World, though. This is a USA problem too.

I work with parolees and they get the free "Lifeline" phones the federal govt pays for. You can get one for free on any street corner in the poor 'hoods of the USA. They are the cheapest lowest spec Android phones with 15GB data/month. That data is burned up by day 3 due to huge Web payloads and then the user can't fill out forms he needs for jobs/welfare and can't navigate anywhere as he can't load Maps.


I’m curious how quickly the data would be used up if only using it for the intended forms/jobs/welfare. I wouldn’t be surprised if the data lasted barely any longer due to bloat.


When I had one of these the data only lasted me 4 days. I didn't even purposely watch any videos, but I did read a lot of news articles and many "magazine" style sites have huge video payloads that load (and sometimes play) in the background if you're not running something like uBlock. I found some sites with 250MB home pages :(


Most of those users have the advantage of not using English - and so there are often sites in their native language that cater to lower power devices.

But if you’re in that middle-income country AND your official language is English, you’re gonna have a hell of a slow time.


Could you elaborate on features and polish, i.e. give some specific examples?


What I meant was, I'm pretty sure most of what is considered features and polish today would greatly improve my experience if it were removed...


I like how most people blame bosses or scary big companies. No developers appear willing to admit that there is a large cohort of not-that-great web programmers who don’t know much (and appear to not WANT to know much) about efficiency. They’re just as much to blame for the sad world of web software as the big boss or corporate overlord that forced someone to make bad software.


I have worked with such people. When I asked them specifics about the "result" (HTML, CSS, JS), they looked at me as if I was speaking another language. They came from the JavaScript framework world, and there they didn't really think all that much about it.

My philosophy is nearly completely different, I ask myself what the minimum maintainable code is that would produce the equivalent of a well hand coded HTML+CSS+JS website. Usually the result is magnitudes smaller.

One of those people asked me how I did realtime list filtering on 1000 table rows and still had it load fast and perform well on mobile. While that isn't really a feat, all I did was deliver the whole data set on the first request and then hide non-filtered rows dynamically. That means the webserver didn't have to do anything wild, other than deliver the same cached data to everybody who filters that list, and because this was the only JavaScript going on on that site, it was (to them) unusually performant. If you look at a comparable table row from their solution (some framework, I didn't have much insight into it), the resulting HTML was 80% boilerplate that they didn't even use.
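Roughly, the client side amounted to something like this (the ids and selectors here are illustrative, not the actual code):

    // All ~1000 rows are already in the delivered HTML; filtering just toggles
    // their visibility client-side, so there is no server round-trip at all.
    const input = document.getElementById('filter');
    const rows = document.querySelectorAll('#data-table tbody tr');

    input.addEventListener('input', () => {
      const needle = input.value.toLowerCase();
      for (const row of rows) {
        row.hidden = !row.textContent.toLowerCase().includes(needle);
      }
    });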

Web development is too entrenched and many wandered too far from the essentials of web technology.


About 5 years ago I applied for a job at a company that enabled people in rural Africa to more easily sell the goods they produced (farmers, basket weavers, what-have-you).

If you mainly target people in the US or EU, there's perhaps something to be said for not optimizing too aggressively for low-end hardware and flaky low-bandwidth high-latency connections. But if you're targeting rural Africa, fairly aggressive optimisation seems like a no-brainer, right?

Their homepage loaded this 2M gazillion by gazillion pixel image downscaled to 500 by 1000 pixels with CSS. It got worse from there. I don't recall the exact JS payload size, but it was multi-MB – everything was extremely frontend-heavy, which was double ridiculous because it was mostly a "classic" template-driven backend app from what I could see.

I still applied because I liked the concept, but the tech was just horrible. I don't really know why it was like this as I never got to the first interview stage, but it's hard to imagine it's anything other than western European developers not quite realizing what they're doing in this regard.


The website was never intended for people in rural Africa, it was intended for donors and governments in Western countries, so that the company could get juicy grant money and pay themselves to pretend to empower African farmers.


Believe it or not people outside the US actually buy and sell things too! Africa is a primarily mobile first continent with the fastest growing mobile infrastructure and 100 million new handsets sold a year (and that is predicted to double by 2025).


You better believe that I believe it. However, if they were actually trying to get rural people with bad internet connections as customers, they would make a website that was fast and efficient.


Having worked with the "big scary companies", I can say they are 100% to blame. It doesn't start with the developers but rather the budget. Unless folks at the top are tech savvy and/or have an engineering background, they typically only budget for new features and either under-budget or don't budget for maintenance and tech debt removal. And when they do budget for maintenance, it's handled almost exclusively by "maintenance teams" that are offshore and cheaper.

So you have a feature team that works on a feature for 6 months, does a 1-hour "KT Session" with the offshore maintenance team and hands them the code. The offshore team has some information on the feature but not enough to really manage existing tech debt, just to keep the lights on. And on top of this they know they are the lowest on the totem pole and don't want to get fired, so they don't go out of their way to try and fix any existing code or optimize it, again just enough to keep the thing working.

Then this cycle repeats 100-1000x within an org and pretty soon you have a scenario where the frontend has 2M lines of code when it really should be 250k max. A new feature team might come on with the brightest engineers and the best of intentions, but now they have to work within the box that was setup for them. Say they have a number of elements that don't line up with their feature mockups. The mockups might be incorrect, there might have been an upgrade to the UI kit, or the existing UI kit might need refactoring. Problem is none of that is budgeted for so the team is told to just copy the components and modify them for their own use. And of course on handoff to maintenance team, the new team does not want to mess with the existing feature work so they leave it as is. Management is non-technical so they don't know the difference, and you end up with 50+ components all called "Button" in your codebase from years and years of teams constantly copy/pasting to accommodate their new feature.


That's not fair. Sure, if there's an experienced dev who _values_ efficiency on the team, who pushes for the site to be more efficient or builds it more efficiently to begin with, the page would be better off. But it's mostly about incentives. If management doesn't care, they will likely not react well to programmers spending time making the site more efficient instead of spending half the time to just get it running and then crunching through their backlog.


It usually requires less time, not more, to create a slim and efficient page.


It's very situational. If you're talking about writing a static site generator or handcoding a web page, that's mostly true, although if you're trying to not just be efficient, but as efficient as possible, things like optimizing assets are a small but additional step.

If you're maintaining a web app over a period of years, it takes at least some effort and time to keep it slim and efficient because small inefficiencies here and there start to accumulate and these tend to be more demanding even in the best case.

There are some antifeatures that Dan Luu identifies, like dynamically unloading content, that probably take a considerable amount of time to implement while degrading both the user experience and efficiency, but I doubt avoiding those is enough to ensure good performance on more complicated projects.


Definitely not true in my experience, and I would think if it were true, most pages would be "slim and efficient". Where is the business value in doing anything else at that point?


Static html sites are so easy. You can write one by hand in five minutes and it can run on a toaster. There’s more business value in ads and dark patterns.


Nothing prevents adding ads, trackers and dark patterns to a static website; I would assume most of it is served by 3rd-party servers that handle the dynamic widgets. Isn’t that easy to insert into an otherwise static site?

To be clear, I'm not saying this is a good thing; I just don’t see what prevents avoiding much of the bloat on top of that which supposedly adds “business value”.


The GP might not always be true, but no, we would not have slim and efficient sites, because of the push web developers get to include all kinds of unnecessary tracking and general bloat on websites.


The "business value" is that it is easier and cheaper to find a handful of scriptkiddies who think everything has to be done in JS, run after the latest hype, and don't even know you can send a form using plain HTML, than it is to get competent, educated devs who know when JS or even a frontend framework is the right tool for the job.


> Where is the business value in doing anything else at that point?

You think developers prioritize business value? That isn't how employment works.


True but only if you know how to. Also slim will 99% of the time be less code too.


but can it do feature x that generates more $$$ ?


Usually bad web software correlates with bad content. Therefore having a slow device is an excellent filter helping to avoid garbage.


A friend of mine who was a barber asked me how much it would cost to build him a website and I said I would do him a basic 3 page site for free, although he would need to let me know if his opening hours had changed or he needed to tell customers he was on holiday, etc.

He said, with no irony whatsoever, that he didn't realise it would be so complicated and decided not to take me up on the offer. I suspect this attitude is not unusual with one-man businesses that have survived just fine thus far?


Unfortunately the modern web has consolidated to a point where you need to use them. For example, small local businesses that don’t have a web site but do have a Facebook page.


Having only a Facebook page and forcing people onto that toxic platform is a strong indication that they do not value freedom (of the web) and ethics. Again, a good filter for businesses / people I want to avoid.


> is a strong indication that they do not value freedom (of the web) and ethics.

I don't think the average barbershop/restaurant owner will care about that, for instance? They just wanna set up a Facebook/Instagram and done, they can now instantly receive messages from clients to make reservations and also share their stuff with posts. I bet they don't even know they can make a website.

Also, every time they end up getting a website, it's powered by WordPress hosted on the slowest server you can imagine. And it will end up redirecting you to a proprietary service to make your reservation (WhatsApp, Facebook, Instagram...)

At least that's what I see in Europe and South America; I have no clue how it is everywhere else.


I don’t think lack of technical skill implies not valuing freedom. There aren't really alternatives that are as easy to use and have their clients already on them. They are as much a captive audience as anyone else.


seems fair to correlate "small local businesses that don’t have a web site but do have a Facebook page" with "bad content"


My local butcher provides good content without being terminally online.

Unfortunately this means needing to use Facebook to find out if they’re open on a national holiday.


idk if "only facebook" is worse than no online presence at all


"it's better for the company that I don't try, my time is expensive and any minute not spent on a feature is a waste of my salary" - is a common justification that I hear all too often.


"It's better for the company that I don't try" seems like a convenient take for a dev without the skills to have. I'd argue that performance is a feature, and if someone can't deliver it their salary is being wasted already.


Performance is a feature and management often doesn't care to optimize for it. If the market valued performance more then we would probably see competitive services which optimize for performance, but we generally don't. I'm sure there's plenty of developers that could deliver improved performance, it's just a matter of tradeoffs.

Maybe the people who care this much about performance should start competing services or a consulting firm which optimizes for that. Better yet, they could devote their efforts to helping create educational content and improved frameworks or tooling which yields more performant apps.


One issue is that caring about performance is often not visible. How does management account for or measure how annoyed people get visiting their bloated websites? How many people do not even know how fast and snappy a non-bloated website can be, because they spend all their time on Instagram, FB, and co? Even if a company does measure it somehow via some kind of truly well executed A/B test, other explanations than performance might be reached for to explain why a user left the website.


Isn't that what the tracking stuff is supposed to track? Measure things like how 'annoyed' people get by bounce rate and whatever other relevant metrics.


Yes, but how do you determine the actual reason for a bounce? The test would need to have all the same starting conditions and then let some users have a better-performing version, or something like that. But at that point one would probably roll out the better-performing version anyway. Maybe artificially worsen the performance and observe how the metrics change. And then it is questionable whether the same amount by which performance decreased would have the same effect in reverse if the performance increased by that amount. Maybe up to a certain point? In general probably not. In general it is difficult, because changing things to perform better is usually accompanied by visual and functionality changes as well.


Add a 500ms delay for group A and compare to group B who don't have the delay. After a week of this compare the sales figures.


I doubt a company would be willing to deliberately risk losing sales by testing a worse version. AB tests are great in theory, but in practice, to test the current slow system against a faster one, you have to do the optimization work which the test is supposed to justify. That’s why AB testing is often used for quick wins, pricing points or purchase flows, but rarely the big costly questions.

Surveys could be used to explain the bounce rate, but people who leave are one of the hardest groups to recruit well for feedback. Usability tests could help with that though.


> If the market valued performance more then we would probably see competitive services which optimize for performance, but we generally don't.

I believe there is some nuance to this due to the winner-takes-all nature of modern software services. There simply isn't a lot of choice for users or switching is expensive so companies don't do it and employees are forced to suffer through horrible performance.


Or switching happens for 90% of the feature and is called good enough, which results in now having 2 systems to maintain because the old deprecated one actually still has critical edge cases depending on it…


Performance is not a feature. Decisions about performance are part of every line of code we write. Some developers make good decisions and do their job right, many others half-ass it and we end up with the crap that ships in most places today.

This “blame the managers” attitude denies the agency all developers have to do our jobs competently or not. The manager probably doesn’t ultimately care about source control or code review either, but we use them because we’re professionals and we aim to do our jobs right. Maybe a better example is security: software is secure because of developers who do their jobs right, which has nothing to do with whether or not the manager cares about security.


I can agree to a point, but it's not very scalable. Imagine if the safety of every bridge and building came down to each construction worker caring on an individual level. At some point, there need to be processes that ensure success, not just individual workers caring enough.

Secure software happens because of a culture of building secure software, or processes and requirements. NASA doesn't depend on individual developers "just doing the right thing", they have strict standards.


Probably a bit of both.

Client-side rendering (regardless of framework) is hip and gets more media attention, sometimes backed by VC. It is new, it is complex. And it fits the hype cycle, software engineers' attraction to complexity, and the Resume-Driven Development model. And just like the article stated, it is supposed, in its ideology, to bring so many good things to the table.

Since the majority of software developers want to work on it, so their resume gets a tick and they can jump to another job later, management now faces lots of applications for these technologies and zero for old and boring tech.

>great web programmers who don’t know much (and appear to not WANT to know much) about efficiency.

Remember when Firefox OS developers thought the $35 smartphone would one day take over the world, and that CPUs would get so much faster due to Moore's law that performance would soon become irrelevant?

I mean, that is like Jeff hating Qualcomm without actually understanding anything about the mobile SoC business, the CPUs behind it, or how ARM's IP works. A lot of people don't want to know "why" either.

A more accurate description, and also a general observation: most software developers, and especially those in web development, have very little understanding of hardware or low-level software engineering. Cloud computing makes this even more abstracted.


People who have never worked on some of the bloated sites often forget the third party involved in the bloating.

The marketing team mandates inclusion of at least one "Tag Manager" (if they are especially bad, there will be multiple).

A "Tag Manager" is a piece of JS that is installed together with an API key in the site... and then it downloads whatever extra JS that was configured for given API key. The actual site developer often has absolutely no control over it (the closest I got once was PoC-of-PoC where we tried to put even inclusion of tag manager behind an actually GDPR-compliant consent screen).

The marketing team gets to add "tags" (read: tracking, chat overlays, subscription naggers, whatever), sometimes with extra rules (that also take processing time!), all without involving the development team behind the site.


Blaming tag managers and marketing departments is quite common, and yes, while they are problems on some sites, many developers overlook the impact of their own technology choices, e.g. client-side rendering, JS-based components, etc.


There's a reason why I speak of "third party", though I guess it might be unclear in English - my bad.

There are three parties involved in the bloating. Management prioritising certain things is one. Developers (including here programmers, designers, and others) not caring enough or otherwise making choices that lead to bloat is second. The marketing team, with the power to require problematic things or to just go crazy with the tag manager, is third.

All three are involved in the "bloating crisis".


So marketing department is responsible for web being such a sad and painful experience?


money is the top reason for ppl doing stupid shit, yes


Not the only department.

But consider how much of the bloated JS tends to be from external parties, and pretty much everything that isn't CDN-ed frameworks will be stuff either required by marketing, or flat out added through the use of a tag manager.


> there is a large cohort of not that great web programmers who don’t know much

I think you mean “programmers” not just “web programmers”. I’ve worked with plenty of bloated over-engineered Java and C# codebases that take several minutes to start on a very fast developer machine with 32 gigs of RAM. Sometimes, the worst offenders use “lower level” languages! The performance averse problem is endemic in the entire field, not just in the web.


The whole point of hierarchical organizations is that those higher up have more influence than those at the lower tiers. Cutting the blame in half doesn’t make sense.


We need some HTML tag or attribute for slow network detection.

Instead of this nasty JS feature detection that 99% of the time no one does.

prefers-reduced-motion was a good start, although it's rarely respected.
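For reference, the JS feature detection that exists today is roughly this (the Network Information API is Chromium-only, and the 'lite-mode' class is just an illustrative hook):

    // navigator.connection (Network Information API) is only exposed in
    // Chromium-based browsers, hence the guard.
    const conn = navigator.connection;
    const saveData = conn?.saveData === true; // user has opted into data saving
    const slowNet = conn ? ['slow-2g', '2g'].includes(conn.effectiveType) : false;

    if (saveData || slowNet) {
      // e.g. skip autoplaying video, serve low-res images, defer non-critical JS.
      document.documentElement.classList.add('lite-mode');
    }

(On the server side, Chromium browsers also send a Save-Data request header when data saver is on, so a server can adapt without any JS.)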


Should be easy enough for the server to detect when the connection is slow. That could be done now without having to update the clients and give them more things to process.


Is it really hard to believe that the solutions on offer are usually giant piles of steaming crap that do way more than they should but are nevertheless easy to get set up and get going? When programming ecosystems get big, they accumulate a ton of ways of doing things and people keep trying to put a layer on top on top of a layer on top of a layer (like floors in an old house). It doesn't matter if a thing underneath is O(n); someone will put another O(n) thing on top of that that represents all its data as strings and uses regex or horribly-inefficient JSON or something. Very few people ever think things from the ground up.


You are right.

I browse the web on Firefox with uBlock Origin, 3rd party cookies disabled, and so on.

So I am missing the bloat most people talk about.

But still apps like Clickup are really slow. It's just bad software.


It might also be people not willing to admit that they don’t know how to optimize website performance. And if you have a somewhat complex web app it can be complicated, especially if your stack doesn’t help you (looking at you React).


>I like how most people blame bosses or scary big companies.

It ALWAYS starts at the top, no matter how you slice it. Why are the incompetent devs there in the first place?


I only recently moved from a 6-year old LG flagship phone to a shiny new Galaxy, and the performance difference is staggering. It shouldn't be - that was a very high-end phone at release, it's not that old, and it still works like new. I know it's not just my phone, because the Galaxy S9s I use to test code have the same struggles.

I would like to have seen Amazon in the tests. IME Amazon's website is among the absolute worst of the worst on mobile devices more than ~4 years old. Amazon was the only site I accessed regularly that bordered on unusable, even with relatively recent high-end mobile hardware.


I have noticed with two 7 year old Snapdragon 835 devices that RAM and running a recent Android version makes a huge difference.

I daily drive a OnePlus 5 running Android 14 through LineageOS and the user experience for non-gaming tasks is perfectly adequate. This phone has 6GB of ram, so it's still on par with most mid-range phones nowadays. My only gripe is that I had to replace the battery and disassembling phones is a pain.

Meanwhile a Galaxy S8 with the same SoC, 4GB of memory and stock Android 9 with Samsung's modifications chugs like there's no tomorrow.

I can understand that having two more gigabytes of memory can make a difference but there is a night and day difference between the phones. Perhaps Android 14 has way better memory management than Android 9? Or Samsung's slow and bloated software is hampering this device?

Either way it's irritating to see that many companies don't test on old/low-end devices. Most people in the world aren't running modern flagships, especially if they target a world-wide audience.


This is what I miss from the removal of serviceable components on MacBooks. Was a time I would buy the fastest processor and just okay memory and disk, then the first time I got a twinge of jealousy about the new machines, buy the most Corsair memory that they would guarantee would work, and a bigger faster drive. Boom, another 18 months of useful lifetime.


Is the total useful lifetime more than MacBooks with non serviceable components? I see people around me easily using Airs for 5+ years.


Yes, but that's the slow-boiled frog syndrome. I use my computers for years as well, and whenever I get a new one I think "wow, why didn't I switch sooner, this is so much snappier".


As a counterpoint, I have a 2015 MacBook, a 2015 iMac, and a recent Apple Silicon MacBook. Of course I do Photoshop, Lightroom, Generative AI, etc. on the Apple Silicon system. But I basically don't care which system I browse the web with and, in fact, the iMac is my usual for video calls and a great deal of my web document creation and the like.

I suspect that people who have somewhat older Macs (obviously there's some limit) who find their web browsing intolerably slow probably have something else going on with either their install or their network.


>I do Generative AI,

This makes me call into question literally everything else in your post.

You might be able to do CPU-based inference for a few trials for fun, but you aren't running LLMs on a CPU on a daily basis.


I do some local image generation now and then (mostly using Photoshop). Are you happy now? My only point was that any CPU/GPU-intensive applications I run (and really most local applications) I do on my newish computer. But most stuff I run is in a browser.

The relatively little LLM use I do is in a browser and it doesn't matter which computer I'm doing it on.


I’ve been a Mac user since 2003 or so and I can confidently say my machines last 6-7 years as daily drivers then sunset over 2-3 years when I get a new computer. I always go tower, laptop, tower, laptop. They have a nice overlap for a few years that serves me well.


My MacBook Air (11-inch, Early 2014) is my only computer. I still don't feel like changing it so far...


Amateur… I am using a 2009 15" MacBook Pro Unibody, with the SuperDrive swapped for an SSD, another main SSD, and RAM boosted to 8 GB. OpenCore Legacy to update to a relatively recent version of macOS. The only really annoying thing is that the webcam doesn't work anymore, and a USB port is dead too.

So sad these kinds of shenanigans are not possible anymore.


Pfah, showoff. My 2005 Thinkpad T42p crawls circles around that thing - slowly. Maxed out to 2GB, Intel 120GB SSD with a PATA->SATA adapter (just fits if you remove some useless bits from the lid) and - what keeps this machine around - a glorious keyboard and 1600x1200 display. It even gets several hours on the battery so what more could you want?


Mmh… I see that we definitely have people of good taste around here.


I have one of these, a MacBook Pro 6,2 that I did the same upgrades to. However, I finally decided to retire it when the 2nd replacement battery swelled and Chrome stopped supporting OSX 13.

It didn't look like a good candidate for OpenCore Legacy because of the dual video cards, but it feels so gross recycling a perfectly working computer.


I run the one from 2011 (16 GB of RAM though) and it runs highly minimalistic Arch Linux. So far so good.


My Air isn't that old, and I'm eyeing a new one...

I find that a lot of my work is "remote" at this point. I'm doing most things on servers, VMs, and containers on other boxes. The few apps that I do run locally are suffering (the browser being the big offender).

Is most of what you're doing remote? Do you have a decent amount of ram in that air?


No, most of the work I do is local, but it's fairly easy stuff: some statistical software, Excel, Word, a browser. And my browser is not suffering that much, perhaps because I have 8 GB of RAM and I visit simple websites. Using an adblocker is fundamental tho.


I have an Air from 2011 or 2012 that is out of storage with just the OS installed. I can't update or install any other software because the most recent update installed on it capped out the storage. Low-end Windows laptops (the $150-$300 at Walmart type) have this same issue: 32 GB of storage, Windows takes 80% of the space, and you can no longer fit a Windows update on it.

I still have the Air with whatever macOS is on it, but as soon as I have a minute I'm going to try to get Linux or BSD on it. I'm still sore at how little use I got out of that machine - and I got it "open box", "scratch and dent", so it was around $500 with tax. I got triple the usage out of a 2009ish eeePC (netbook).


You could try ChromeOS Flex on it?


The main thing that convinced me to get on the ARM macs is the heat and battery life(which kind of go together). It's never uncomfortable on the lap.


Controversial counterpoint: Having standardised hardware causes optimisation.

What do I mean?

In game development, people often argue that game consoles hold back PC games. This is true to a point, because more time is spent optimising at the cost of features, but also optimising for consoles means PC players are reaping the benefits of a baseline decent performance even on low end hardware.

Right now I am developing a game for PC and my dev team are happy to set system requirements at an 11th generation i7 and a 40-series (4070 or higher) graphics card. Obviously that makes our target demographic very narrow but from their perspective the game runs: so why would I be upset?

For over a decade memory was so cheap that most people ended up maxing out their systems; the result is that every program is Electron.

For the last 10 years memory has been more constrained, and suddenly a lot of Electron became less shitty (it's still shitty), and memory requirements were something you could tell at least some companies started working to reduce (or at least not increase).

Now we get faster CPUs, the constraint is gone, and since the M-series chips came out I am certain that software that used to be useful on Intel Macs is becoming slower and slower. Especially the Electron stuff, which seems to perform especially well on M chips.


I want to research this route more, but the camera is an important component to me. I suspect there is a model of phone from 5-10 years ago that has an under-the-radar stellar camera and that I would find "perfectly adequate". ("Perfectly adequate" is my favored state for most tech solutions.)


Yeah the camera is the only feature that would really make me want to switch phones. In my case it's more about being a broke CS student without a job lol.

But the low-end device thing still stands. At least here in Argentina where I live most people can't buy a $1000+ phone without going into debt or saving money for a stupid amount of time to get it. Some people that really can't afford to do so still buy them though. Maybe it is reasonable for some but I never saw any appeal in spending so much money (comparatively to a monthly salary) on a non necessity. I happily spent that kind of money on a PC to use for work/study, but a phone? Nah.


Same! The camera is the only part of the phone I want to spend real money on.

Beyond personal preferences, I live and work in an area of California where people could greatly benefit from easily accessible phones so I'm interested in what's possible.


The Huawei P10+, released in 2017, has very good Leica optics, on par with much newer iPhone or Galaxy devices.

https://www.gsmarena.com/huawei_p10_plus-8515.php


I don't think the RAM is the difference-maker. The old LG phone in question is a V35, which has 6GB and a Snapdragon 845.


Did you try disabling JavaScript on Amazon? It actually doesn't function too badly. I know, I know, you shouldn't need to do it and I agree.


I fiddled with NoScript but I must have done something wrong because I broke the site entirely.


I recently visited Brazil and had my shiny new phone snatched from my hand ... now with my spare 4-year-old phone, I frankly don't see any difference. But I use Firefox with all the ad blockers, maybe that helps.


I run Firefox with uBO and NoScript. Based on the other replies, OS version may play a role.


I have a Palm Phone. I generally consider web browsing to be almost impossible on it at this point lol


Are you able to change the DNS on it to NextDNS or LibreDNS: https://libredns.gr/

Blocking ads and trackers might help you to browse the web.


I block ads with uBlock Origin in mobile Firefox. I can use simple sites like hacker news, and I can actually use old.reddit.com if I'm willing to zoom in and out a bunch.

The Palm Phone lags with just about everything honestly, but I like the form factor of having a phone the size of a credit card. But since software only gets slower, most of the web is just beyond it at this point.


I have no issues with Amazon on my iPhone 8 running latest iOS 16


Interesting that you have such problems with Amazon. I'm using an iPhone XR (5.5 years old) and don't have any problems using Amazon in the browser (Safari). And I'm on the latest iOS (17.4).


The iPhone XR was 4x as fast as the Galaxy S9 in web browsing https://images.anandtech.com/graphs/graph13912/95169.png


OS version may have an impact. The Galaxy S9s both run Android 9. That LG phone is stuck on Android 8 because AT&T sucks and never got around to updating their shitware-riddled Android fork. If they had, I wouldn't have needed to spend $800 on a new phone. I'm not bitter about it at all, though.


iPhone browser performance has run circles around android browser performance on equivalent hardware for like the last 10 years or so. It’s really the secret sauce of iOS.


Yeah, by the way browsing on iPhone 6S Plus is quite okay, compared to even MacBook Pro (2011, but that’s a laptop!), I would say.


iPhone has exceptional long lasting performance. I have a 5 year old iPhone and it still runs smooth like silk.


Related: Too much of technology today doesn't pay attention to, or even care about, the less technologically adept.

Smartphones in my opinion are a major example of this. I can't tell you the number of people I've met who barely know, or don't know at all, how to use their devices. It's all black magic to them.

The largest problem is the over-dependence on the use of "Gesture Navigation" which is invisible and thus non-existent to them. Sure, they might figure out the gesture bar on an iPhone, but they have no conception of the notification/control center.

It's not that these people are dumb either, many of them could probably run circles around me in other fields, but when it comes to tech, it's not for a lack of trying, it's a lack of an intuitive interface.


It doesn't help that when you get a new iPhone it doesn't ship with its documentation. You have to get to the actual documentation page on Apple's site, and then dig a little to get to a page that looks like this (1), which merely outlines a few possible gestures, not which ones to use when beyond a one-sentence example. And this is just for the OS. What apps ship with documentation that outlines how these gesture functions are used in their app?

https://support.apple.com/guide/iphone/learn-basic-gestures-...


Despite being a young person in tech, I find myself totally at a loss when presented with an iPhone. I gave one a try a few weeks ago (went with a Galaxy instead) and my fiancée had to walk me through in baby steps.

Android apps tend to be decent about giving a little tutorial when they open, highlighting buttons with little blurbs explaining their use. Is this a trend with iOS?


I don't know, I've seen mature people who couldn't operate a cassette deck and likely would have trouble with a typewriter. These people definitely grew up around these devices.

I don't think (modern) technology is at fault here.


It appears to me, as an outsider, that interfaces are designed with a "one size fits all" approach, at least at the prestige end of town. Instead of allowing the user to choose design and interaction that works for them, the designer (or product owner) acts as if they know what's best for all users.


What would the alternative look like? Applications shipping as a bag of arrangeable buttons and widgets that the user assembles into pages?


Actually, I find this highly ideal. I wish there was a button to press which would switch the interface into an almost Visual BASIC GUI editor like thing, permitting me to edit the arrangements. Also, I would like it if such an OS was more strict on forcing its interface objects (think: SimCity 2000 for Win95 with GDI-integrated GUI good, SimCity 3000 with Fisher-Price full screen toy interface bad). Also throw out much of the post- Windows 2000/KDE 3.5 desktop user interface 'innovation' but make all things editable in layout. I WANT MY COMPLICATED BUTTON GRIDS! :^(


I think this tends to sound like a better idea than it is. It's good for power users who want to optimise their UI to suit, but regular users aren't going to do that.

Gesture navigation's lack of discoverability is a problem for sure, although I'm not sure how to best address it (people aren't likely to sit through tutorials...)


You have to do tutorials. Actually, why isn't there a UI tutorial app which is basically just practice for all the different BS ways UI writers invent to do the same things? It should come pre-installed. Like a DuoLingo for user interfaces! Heck, make sure it covers as many common interface paradigms as possible!


Siemens PLM NX 10 is another example of what I like in an interface. The GIMP big time as well for its customizability. You know what I don't like? Gnome. I curse Gnome 3 (namely, the design cancer Gnome fell to early on) for why KDE has yet to recover to the comfiness of KDE 3.5. Apple is another hate.

I want a computational environment, I am a cyborg! I build my environments to my specifications. I am a privacy and control absolutist with these devices, because they are cybernetic extensions of my mind. SV: Stop being over-opinionated pricks trying to monetize every last drop of attention for every bottom-pocket penny in microtransactions. What we develop here is far and beyond more spiritual than we can all imagine. The utter lack of owner/user sovereignty shown lately, basically since iPhone and Facebook, captured in the term Enshittification, is absolutely appalling.

Anyway, thank you for reading my unspellchecked schizo-ramblings. Now carry on with the great monetization, metatron hungers!


I think themes might be closer to what I'm imagining. For example, material versus skeumorphic. But as the OC was talking about interactions, I'd be interested in seeing a scale of assistance or explicitness. For example, at the "I've never used computers before" end of town, there should be lots of inline annotations about what buttons do, lots of dropdown lists, etc. At the "power user" end of the scale there would be keyboard shortcuts, icons, tooltips etc.


The smartphone world is too crooked to have an alternative IMO. Just keep eating everything the vendor gives you on the top of shovel.


Or the user picks from a set assembled by someone else


Congratulations, you just invented OpenDoc.


> The largest problem is the over-dependence on the use of "Gesture Navigation" which is invisible and thus non-existent to them.

This is uniquely an iPhone problem, not a smartphone problem. It’s been one of my biggest gripes after switching from android. Where the heck is my back button? Home button? Any buttons? I really despise Apple’s obsession with minimalism and will be switching back to android when this phone dies.


This article is basically unreadable for me (48 y/o, on desktop). In the dev tools I added the following to the body to make it readable:

    font-size: 18px;
    line-height: 1.5em;
    max-width: 38rem;
Now look how readable (and beautiful) it is. I read a lot of Dan Luu's posts, and each time I have to do this sort of thing to make it readable.

Seriously, techies, it's an extra 64 bytes to make your page more readable.


I'm 53 and I'm at least five years behind getting my specs sorted out - they are currently perched right on the end of my nose now and I have to get the angle right sometimes (astigmatism).

That page is nearly fine for me but I just hit CTRL + to scale up. That works for me.

That page is pure text with no or at least minimal fiddling. You have your solution for your use case and I have mine. A blind reader will also have their solution, so they can even access it. Thanks to the simplicity of the source: all solutions to accessibility are also going to be reasonably simple.

I think that Dan understands how to communicate effectively - keep it simple and don't assume that eyes will read your words. You can trivially (and you do) fiddle with the presentation yourself for your own purposes.

I think that if you don't like the presentation of something like this then you could reformat it yourself, prior to engagement. Dan has kindly provided his message as a simple text stream that can be trivially fiddled with.


> That page is nearly fine for me but I just hit CTRL + to scale up. That works for me.

How do you do CTRL++ on a mobile phone?


Click on reader view. Then you can customise reader view from the menu and set it to how you want it. Then every time you click on reader view you can read in the font and size that works for you.


Pinch to zoom, which since basically pinch to zoom was invented should reflow elements.


Pinch to zoom magnifies part of the page. That’s less helpful for text because you have to scroll the smaller viewport to read a complete line of text.

On iOS, there’s a text scale button in the URL bar which does the trick.


Pass the page through a screen reader and listen to the ensuing podcast.

Try using some imagination!


I assume you use some sort of gesture. Hit F1 to find out how to do it.


> How do you do CTRL++ on a mobile phone?

No idea but you seem to have managed it 8)


In Brave you can do Accessibility - Text Scaling


I think your mods are sensible, however if Dan Luu added those CSS rules himself, there would be comments on here lamenting the low density and "excess whitespace". Luu's audience, on the whole, probably prefers the relatively unstyled approach.


I disagree. The user can change the window size, font size, colours, etc according to their own preferences.

> I read a lot of Dan Luu's posts, and each time I have to do this sort of thing to make it readable.

You shouldn't have to. You should be allowed to add a CSS file which can apply to multiple files, and then use that, instead of having to do it for each file individually.
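
Firefox does still let you do roughly that with a per-profile user stylesheet, for what it's worth. A rough sketch (you have to flip toolkit.legacyUserProfileCustomizations.stylesheets to true in about:config first; danluu.com is just the example target):

    /* <profile>/chrome/userContent.css */
    @-moz-document domain(danluu.com) {
      body {
        font-size: 18px;
        line-height: 1.5em;
        max-width: 38rem;
        margin: 0 auto;
      }
    }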


You can change your browser's default font size if you find it too small. It's in Firefox's main settings page. Websites shouldn't force "font-size: 18px;" because it then makes the font smaller for users who picked a larger font in their browser.
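
A middle ground, if a site really wants slightly larger text, is a relative unit that scales with whatever default the user picked, e.g.:

    /* 1.125rem is 18px only when the browser default is 16px;
       users who chose a larger default still get proportionally larger text */
    body {
      font-size: 1.125rem;
      line-height: 1.5;
    }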


I agree that they should add some minimal CSS. But using your browser's Reader View also works, a click rather than multiple steps in DevTools.


Funnily enough, Dan calls out the differences of opinion of the styling of his site starting at this paragraph:

> Just as an aside, something I've found funny for a long time is that I get quite a bit of hate mail about the styling on this page (and a similar volume of appreciation mail). …


I just activate reader mode on his pages, works great. (Not disagreeing with you, just stating another workaround)

Also wish his pages had dates on them (one or both of first posted / last updated). AFAIK he intentionally leaves them out; I don't get why.


> AFAIK he intentionally leaves them out, I don't get why.

Some people like to brag about the timelessness of their articles [1], and that might be one reason. (I personally don't fully agree though, even the linked original WikiWikiWeb page has a last edited date.)

[1] https://wiki.c2.com/?WikiNow


The first time I saw this blog posted on HN I wondered how it could possibly be popular with such horrendous layout.

The conclusion I came to is that the audience is very tech-savvy and is used to activating Reader Mode when they encounter pages like this.


It's more of a hipster thing imo. For some people, since it's minimalist and looks "old", it must be good. Like, I get keeping it simple, but man, it's CSS...


Yep, exactly. It's fashion. FOUC-chic.


Dan’s site has been like this for over a decade. If it’s a fashion, then he’s one of the creators of it.


Brutalist web design has been a thing for a while: https://www.washingtonpost.com/news/the-intersect/wp/2016/05...

Some of it can be appealing, when basic ergonomic needs are met (readable text size and line length, adequate margins, and so forth). Most is just brutally pretentious, IMO.


Actually when I hit pages like this, I use the increase font size buttons. I tend to do this on phones too, especially. Yes, reader mode is also an option, but just bumping up the font size works too. You could also go back to the days when we had 800x600 monitors and 16px tended to be just the right size for that. ;-)


I went with the other techie solution: resizing my browser window.


Since I have a ton of tabs open and jump between them, this ends up not being a solution I use anymore.


How do you do that on mobile?


Who reads articles like this on mobile? In a pinch, I'd just activate Reader Mode (Safari, iOS), or more likely save it for reading on a bigger screen (tablet, laptop,…)


> Who reads articles like this on mobile?

The irony of this on an article about how developers ignore users on low-performance mobile devices


Read it on iOS Safari, without reader mode. Worked great.

Only thing that annoyed me is that there are very lengthy appendices. Thus the scroll bar suggests the main article is much longer than it actually is.


I just read it on Firefox mobile without reader mode.


It's already perfectly readable on mobile, either vertically or horizontally (a rare affordance these days)


Might be for you, but the tiny text and cramped line height make it painful for me

Pretty sure the text size is likely to be marginal from an accessibility PoV, and the line length doesn’t aid readability


This page doesn’t specify _any_ font size. It relies on your browser to choose an appropriate size instead. If the text is too small for you to read, then your browser settings for default font size are wrong.


Firefox on Android has a button to activate Reader Mode right in the URL bar.


Flip the phone into portrait mode.


The text fills the entire screen on mobile. That's a lot better than reading something where there's 50% of whitespace.


Or better yet a postage stamp of text between two ad players and a header and footer banner


It’s pretty terrible on my phone too. Almost no margins and small font. Thankfully Reader Mode works in Safari, which fixes everything.


This page doesn’t specify _any_ font size. It relies on your browser to choose an appropriate size instead. If the text is too small for you to read, then your browser settings for default font size are wrong.


Then adjust your browser settings to your preference, because that certainly isn't mine either.

I've had to remove "max-width"'s from a ton of sites using my filtering proxy. My window is this big, I expect your content to fill it!


I'd prefer to see the grey text trend die, honestly. I think my number one style rewrite is just setting `color: black` on things.


> In the dev tools I added the following to the body to make it readable

For cases when you don't agree with the styles, there is Reader Mode. Your way works too, but Reader Mode is simpler; it's just one click away.


True, although not all browsers have Reader Mode. Chrome didn't have it until last year, and the version they built is a sidebar, unlike most Reader Modes. This is probably because they want to make sure ads are shown alongside the Reader Mode.


In Reader Mode the colors in the table disappear. Ironically, the author does style that.


Why don’t you just change your browser’s default font size?


> max-width: 38rem; Now look how readable (and beautiful) it is.

How is it readable when you're limiting text width and not taking advantage of the whole screen you paid for?

[Turning 48 next month and wearing glasses.]


FYI the optimal line length is 50-75 characters, and that has been the standard for text since typewriters. You don't want to move your neck to read a single line; that's kinda silly.
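
If you do want to cap it in CSS, the ch unit (the width of the "0" glyph) maps roughly onto that guideline, just as a sketch:

    /* roughly 65-75 characters per line for typical body fonts */
    p { max-width: 65ch; }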


If you have to move your head when reading text in your web browser, then your web browser’s window is too wide. Narrow it until you are comfortable, but don’t try to impose your limitations on other people.


> that has been the standard for text since the type writers

I have a feeling it was the standard because they used the minimum font size to make the letters readable, and that's how much it fit on the physical page width. Which was standardized before typewriters for unknown historical reasons?

> You don't want to move your neck when you read a single line that's kinda silly.

I don't have to move my neck to read the article spread across the full width of my monitor. On a 13" laptop or 24" desktop. Are you using a 21:9 ultrawide?


It's been this way forever because it's not particularly difficult science and is extremely easy to test for so there are probably thousands of papers covering this. Here's a good summary by Baymard Institute[1].

Also WCAG recommends line length set to <80 characters too [2]. I'm not sure what else could make this more convincing or official.

1 - https://baymard.com/blog/line-length-readability

2 - https://www.w3.org/WAI/WCAG21/Understanding/visual-presentat...


> Also WCAG recommends

"recommends". Want to deny me the option of longer lines?


Just like lelanthra currently applies custom CSS `body { max-width: 38rem; }` to this page, if the page had that maximum width set by default, you would equally have the ability to apply the CSS `body { max-width: unset; }` to the page. So you would not be denied the option of longer lines.


If you can't read font-size: 14px, you got your resolution/scaling/screen size wrong. The default text size is similar to the standard text size of OS UI controls. If you can't read them, I'd suggest to reconfigure your setup: change resolution, change scaling, or configure the default zoom level.


Do you have any idea of the layers of tooling you must use these days to produce those 64 bytes, and how each of those layers changes and removes what was fed in from all the other layers? To get exactly those bytes out the other end of the tools would be a herculean effort.

Because we can't just go around trying to understand basic web-based development without the frameworks … can we?


As a data point, YouTube is unusable on a Raspberry Pi 3. This happened within the last year; prior to that you could "watch" videos at about 10-15 FPS, which is enough, for instance, to get repair videos in a shop setting (ask me how I know). When the Raspberry Pi Model B - the first one released - came out, you could play 1080p video from storage, watch YouTube, play games.

I'm not sure what YouTube is doing (or everyone else for that matter).

If we're serious about this climate crisis/change business, someone needs to cast a very hard look at Google and Meta for these sorts of shenanigans. Eating CPU cycles for profit (ad-tech would be my off-the-cuff guess for why YouTube sucks on these low-power devices) should be loudly derided in the media, and people should use more efficient services, even if the overall UX is worse.


Could it just be due to lack of hardware video decoding? The Pi3 has x264 HW acceleration and youtube started using other codecs a while ago.


I have no idea if it still works, but the "h264ify" browser extension used to be great for working around this issue (by forcing youtube to serve h264) https://github.com/erkserkserks/h264ify


I did a full apt dist-upgrade to try to get the h264ify plugin to install, and if I remember correctly I never was able to get it to install. I upgraded from "chromium" to "chromium-browser" and set all the compositing and other settings recommended for the RPi.

And to reply to another sibling, "yt-dlp" isn't workable; this is for a senior citizen who does small motor repairs.

I got an HP EliteDesk that's a few years old coming in Monday to replace the RPi; hopefully that will last another 3 years before Google et al decide to "optimize" again.


RPI 3 for a senior citizen seems like a poor solution in the first place.

I would have opted for a small business-pc that is x86 based and 3-4 years old.


a used laptop that can play youtube videos can be had for about the same money


ytdl-format=best[vcodec!*=vp9]


Probably. I remember when YouTube switched to H.264 (it might have been some Flash-based video before that). I had an older Mac mini hooked up to my TV at the time, and suddenly video framerates dropped to an unwatchable level because they saved their bandwidth (and mine, though I didn't have to care because my Internet service was not metered) at the expense of client-side processing.


Is YT so impoverished they can't manage some sort of negotiation mechanism that includes x264 and makes it work?


They encode videos ahead of time and they likely decided that whatever hardware you’re judging them by is only .9% of the market so fuck those guys.

Big companies use percentages in places they shouldn’t and it gets them in trouble. .1% when you have a billion users is a million people you’re shitting on.

For me that might be a dozen people. Very different.


Encoding and storing billions of videos in a format used by 0.1% of users feels like a waste though


robustness is only wasted if you're lucky


The context above was exclusion of people based on income level.


Supposedly, the whole point of Google financing “open codecs” was for them to break free from MPEG codec licensing. I imagine the total amount of fees had a lot of zeros. So, yes, each time they don't serve H.264 (unless absolutely required), they save a lot of money.


That might be more on the browser that you’re using. It might be saying “yes I can play this format” to a format it can barely play.


Every YT video is available as H.264, but VP9 is cheaper (smaller) and has better quality.


YouTube is definitely getting heavier. My early 2021 MacBook Air (Intel) now gets random video pauses under moderate load, something that never used to happen.


Could be just ads that adblocker tries to block. Google is trying new ways all the time to bypass adblockers.


>If we're serious about this climate crisis/change business, someone needs to cast a very hard look at google and meta for these sorts of shenanigans

By all accounts client devices' energy consumption is a rounding error in terms of contribution to climate change. Going after them to solve climate change makes as much sense as plastic straw or bag bans.


IT is emitting around as much as aviation, and that was a surprise to me; most of it is due to client devices. I don't have the source at hand at the moment though. And of that, most emissions happen upfront, before you even buy the device. Buying a new device because the old one isn't fast anymore causes emissions, not running it. Think about e-waste as well.


>IT is emitting around as much as aviation

What counts as "IT"? It's most certainly a superset of "client devices", which is what my and the parent comment was talking about.


It has a cumulative effect and drives the continual "upgrade" cycle. When you consider the life-time of an average mobile device, and the resources required to manufacture and ship them, it's a not insignificant problem.


Random source from google[1]:

>Berners-Lee writes that in 2020, there were 7.7 billion mobile phones in use, with a footprint of roughly 580 million tonnes of CO2e. This equates to approximately 1% of all global emissions

Of course, not everyone is replacing their phones yearly. Another source[2] says the average consumer phone is 3 years old. That works out to 0.33% of global emissions, assuming the phones aren't recycled/reused to developing countries. Even if assume people are upgrading their phones for app/web performance reasons, the impact is far less than 1%.

[1] https://reboxed.co/blogs/outsidethebox/the-carbon-footprint-...

[2] https://www.statista.com/statistics/619788/average-smartphon...


To be clear, these emissions include the manufacturing cost, which for reasonable users seems to make up ~80-90% of the carbon footprint. The power usage of the phone itself and associated data centres etc is only a small portion.

It's still somewhat surprising that one could attribute 0.2% of global emissions solely to phone power consumption... I would have expected it to be lower.


Isn’t that quite huge number to be fair?


Compared to a single person's emissions? Yeah sure, but that's because anything multiplied by 8 billion people is going to be huge. The same could be said for plastic bags and/or straws. In relative terms it's absolutely minuscule, and in terms of low hanging fruit it's definitely not the top. You'd be far better off figuring out ways to decarbonize the electricity grid (40%) or the transport system (20%)


I would imagine for phones and laptops the extraction of materials (rare earth metals to make fancy new chips, lithium for batteries,etc) is probably the bigger issue.

Having gotten away from 500+ watt desktops as the standard for light non-gaming computing has been a win in the energy consumption court.

I think there are lots of good reasons to avoid the upgrade cycle but energy consumption of the end device itself probably isn't it. (Embodied energy of the devices, environmental impacts of mining, no good EOL story for ewaste, etc)


> By all accounts client devices' energy consumption is a rounding error in terms of contribution to climate change.

It adds up? How many devices are there? Tens of billions?

Web 345 devs just don't care because the costs are borne by the customer.


The customer doesn't care either because a page that takes 5s longer to load on a 1W TDP SoC costs them around one-millionth of a penny. Even if you're refreshing 100 times per day it's only around 0.05 kWh per year, which at any reasonable electricity prices is a sum that's simply not worth worrying about. You'd get more savings from getting people to turn off their led light bulbs for a few minutes.
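
Rough arithmetic behind that figure (using the 1 W and 5 s from above; the electricity price is just an assumed ballpark):

    1 W x 5 s = 5 J per load
    5 J x 100 loads/day x 365 days ≈ 183 kJ ≈ 0.05 kWh/year
    0.05 kWh x ~$0.15/kWh ≈ $0.008/year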


US centric electricity prices view :)

Also, it's not just your site. It's every site. And the customer pays all those millionths of a penny added up out of their pocket. And all those 5 second delays out of their lifetime.

Edit: btw, at a quick glance you underestimated cell phone SoC TDP by a factor of 2-4.


A single use of an electric kettle sounds like it would completely dominate this consumption.

The time cost is certainly the greatest expense here, power is cheap in consumer computing contexts, generally speaking (at least nowadays with most things racing to sleep), and is mostly relevant because of battery life, not power cost.


> A single use of an electric kettle sounds like it would completely dominate this consumption.

But at least that gets you tea, instead of engagement.

You've got to put X joules in to boil Y liters of water. No choice there, except giving up on the tea.

You can greatly reduce the joules necessary to see cat photos though. And you don't have to give up on seeing the cat photos.


I use Invidious for browsing the site, and watch the actual videos via a script that deobfuscates and gets the actual stream URL and then passes that to VLC.

As another data point, YouTube a decade ago would've been perfectly fine on that hardware too. The culprit is web bloat in general, and more specifically the monstrosities of abstraction that have become common in JS.

Even for those who don't believe at all in "climate crisis", there is something to be said for the loss of craftsmanship and quality over time that's caused this mess, so I think it's something everyone across the whole political spectrum can agree with.


'apt install yt-dlp mpv'

then put this in '.config/mpv/mpv.conf' to thwart the hw requirements

ytdl-format=best[height<=?720][vcodec!=vp9]/bestvideo[height<=?720][vcodec!=vp9]+bestaudio/best[vcodec!*=vp9]/best

and pass URLs to it (I use the 'play-with' FF extension)


Can you share that script? Also using invidious, but passing to vlc sounds good for saving cpu cycles.


Just use something like this:

    mpv --demuxer-max-bytes=1024MiB --vo=gpu --opengl-es=yes --ytdl --ytdl-format="best[height<=800]" "$url"


We need some watchdog group that watches page weight across sites and users, and names and shames them.

Maybe they could do that Consumer Reports style, or maybe it's an add-on that works a bit like Nielsen ratings.


It's worth trying out different browsers. In my experience Chromium-based browsers are a bit faster than Firefox on really low-end devices (Pinephone, ...) as long as you have enough RAM (>1 GB?).

E.g. On the OG Pinephone a 720p video on Youtube is running smoothly in Chromium, but not Firefox.


I've got an old Roku box that has started rebooting after a few minutes of playing youtube videos.

In your case, maybe pulling the video with yt-dlp then playing it works...


YouTube was not tested because monitors can't handle CMYK, and we need a lot of that extra coal black to color the results.


I had to upgrade my 12" Macbook because Youtube Music brought it to a crawl. I could play music or work, but not both.


I used a 12" Macbook as my main development machine. It ran IntelliJ with Python/Django applications, Postgres & Redis running in parallel (along with Safari, Mail, etc) around 2018-2020 just fine.

Tried it somewhat recently around Ventura and the machine clearly appeared to be struggling with the OS alone. So we had a machine that used to be capable of actual, productive work, and is now seemingly struggling at idle? It doesn't look like the new OS brought anything new or useful to the table (besides copious amounts of whitespace) either.


I got mine in 2022-2023, so I couldn't tell how it changed. However I wouldn't be surprised. Operating systems used to be much snappier.


That's absurd. I remember using Winamp (and the skin-compatible Linux clone, I forget its name) to stream internet radio while programming a toy OS in 2004. I could listen to music while compiling and running the Bochs emulator on my AMD Athlon CPU with a whopping 256 MiB of RAM.


That Discourse guy is a classic example of someone designing their product for the world they wished existed instead of the world we actually live in. Devices with Qualcomm SoCs exist in billions, and will keep existing and keep being manufactured and sold for the foreseeable future. No amount of whining will change that. Get over it and optimize for them. People who use these devices won't care about your whining, they'll just consider you an incompetent software developer because your software crashes.


Or they take the route to say “not for you”


It only works when you have a coherent vision of your product. "We can't be assed to optimize our code because we value DX above all else" certainly isn't that.


I'm normally a fan of Dan Luu's posts but I felt this one missed the mark. The LCP/CPU table is a good one, but from there the article turns into a bit of armchair psychology. From some random comments coming from Discourse's founder, readers are asked to build up an idea of what attitudes software engineers supposedly have. Even Knuth gets dragged into the mud based on comments he made about single- vs multi-core performance and comments about the Itanium (which is a long-standing point of academic contention.)

This article just felt too soft, too couched in internet fights, to really stand up.


> readers are asked to build up an idea of what attitudes software engineers supposedly have.

But they do, don't they. Discourse's founder's words are just very illustrative. Have you used the web recently? I have. It's bloated beyond any imagination, to the point that Google now says 2.4 seconds to Largest Contentful Paint is fast: https://blog.chromium.org/2020/05/the-science-behind-web-vit... (this is from 4 years ago, it's probably worse now).

You don't have to go far to see it, from YouTube loading 2.5 megabytes of CSS on desktop to the founder of Vercel boasting about super-fast sites that take 20 seconds to load the moment you throttle the connection just a tiny bit: https://x.com/dmitriid/status/1735338533303259571


You're making the same mistake the post did. It depends on the reader already having sympathy for the idea that bloat is bad in order to make its case. I can read nerd site comments all day that lament bloat. For an article to stand on its own on this point it has to make the case to people who don't already believe this.

Dan's articles have usually been very good at that. The keyboard latency one for example makes few assumptions and mostly relies on data to tell its story. My point is that this article is different. It's an elevated rant. It relies on an audience that already agrees to land its point, hence my criticism that it's too couched in internet fights.


State your case that bloat is good. I currently have a client who will do literally anything except delete a single javascript library so I'd like to understand them better.


Joel Spolsky made that case back in 2001 in "Strategy Letter IV: Bloatware and the 80/20 Myth" (joelonsoftware.com).

The web doesn't scale like desktops - not even close.

Furthermore - this philosophy has made Windows worse and less responsive in all cases.

I understand that this "pays the bills" but my charge is (currently) to make things faster so I am against slowness.


The latest version of Excel loads faster on my laptop than most websites do. I’ve timed this.

I can load the entire MS Office suite and open a Visual Studio 2022 project in less time than it takes to open a blank Jira web form.

What’s your point?


Due to the prevalence of native apps in the macOS world, the difference is often stark. I use Things and Bear, and they're fast; then I try to load Gmail (dump account, so it's not in Mail) and it's so slow. YouTube too. Fastmail, in comparison, loads like it's on localhost.

Block JavaScript and the number of sites that break is ridiculous, some you would not expect (websites, not full-blown interactive apps).


My point is to reply to "State your case that bloat is good" with a famous blog stating a case that bloat is good. Bloat makes the company more money by allowing them to develop and ship faster, bloat makes the company more money by being able to offer more features to more customers (including the advertisers and marketers and etc. side of things), and - well, read the article.

I, too, dislike slow websites and web apps, but I don't think they are some mystery - natural selection isn't selecting for idiot developers, market selection is selecting for tickbox features and with first-mover-advantage they are selecting against "fast but not available for another year and has fewer features and cost more to develop".


That was 2001.

Core frequencies aren't going up at 2001 rates anymore. (And although Moore's law has continued, it is only just. Core freqs have all but topped out, it feels like.) Memory prices seem to have stalled, and even non-volatile storage feels like it's stalled.

My computer in 1998, compared to its predecessor, had storage going up in size at ~43% YoY. It was an amazing time to be alive; the 128 MiB thumbdrive I bought the next decade is laughable now, but it was an upgrade from a 1.44 "MB" diskette. Today, I'm not sure I'd put more storage in a new machine than what I put in a 2011 build. E.g., 1 TiB seems to be ~$50; cheaper, even. Using the late-90s growth rates, it should be 17 TiB… so even though it's about half the price, we can see we've fallen off the curve.


> "And although Moore's law has continued, it is only just."

https://en.wikipedia.org/wiki/Transistor_count has a table of transistor count over time. 2001 was Intel Pentium III with 45 million transistors and nVidia NV2A GPU with 60 million. 2023 has Apple M2 Ultra with 134 billion transistors and AMD Instinct CPU with 146 billion, and AMD Aqua Vanjaram CDNA3 GPU with 153 billion. That's some ~3,000x more, about a doubling every two years.

Core frequencies aren't going up, but amount of work per clock cycle is - SIMD instructions are up, memory access and peripheral access bandwidth is up, cache sizes are up, branch predictors are better, multi-core is better.

> "E.g., 1 TiB seems to be ~$50"

You can get a 12TB HDD from NewEgg for $99.99. Joel's blog said $0.0071 per megabyte and this is $0.0000083 per megabyte, roughly 850 times cheaper in 23 years. Even after switching to more expensive SSDs, 1TB for $50 is $0.00005 per megabyte, a hundred times cheaper than Joel mentioned - and that switch to SSDs likely reduced the investment in HDD tech. And as you say, "I'm not sure I'd put more storage in a new machine than what I put in a 2011 build", few people need more storage unless they are video or gaming enthusiasts, or companies.


> has a table of transistor count over time

My comment explicitly notes this, and that I am not debating that transistor counts have continued to follow Moore's Law. They have. That's not the point.

> Core frequencies aren't going up, but amount of work per clock cycle is

[Citation needed]; this absolutely doesn't match my experience at all.

> You can get a 12TB HDD from NewEgg for $99.99

I looked at NewEgg specifically before I made that comment. (But for the pricing for 1 TiB, as that was comparable.) 12 TiB runs $250–400, with the absolute lowest priced¹ 12 TiB (internal desktop form factor) HDD being $201. So no, you cannot.

¹and the "features" of this "12 TB" HDD include "14TB per drive for 40% more petabytes per rack" (wat) "Highest 14TB hard drive performance" (wat)


The reason we have bloat is it's easier to satisfy stakeholders if you don't give a damn. There's really no reason to discuss this at all once you realize this.

But of course, ranting and reading rants is satisfying in its own right. What's the problem?


I think the article makes a pretty good case for bloat being bad for low-end users actually. His analysis demonstrates how many websites become genuinely unusable on cheaper devices, not just to techie standards, but for anyone trying to actually interact with the page at all.


The article you diss has actual benchmarks in it. The article I linked has actual numbers in it.

At this point you're willingly ignoring it because you dislike that this is additionally illustrated by quotes from specific people.


Usually the directive "don't worry about bloat" comes from above, or outside, the software engineering team. I'm a software engineer and I would love to fix performance problems so that everything runs Amiga smooth. But that takes time and effort to find, analyze, and fix performance issues... and once The Business sees something in more or less working order, implementing the next feature takes priority over removing bloat. "Premature optimization is the root of all evil" and that. I know that's not what Knuth meant, he meant don't be penny-wise and pound-foolish when you do optimize. But much like "GO TO considered harmful", something approaching the stupidest possible interpretation of the maxim has become the canonical interpretation.

And that's before getting into when The Business wants that sweet, sweet analytics data, or those sweet, sweet ad dollars.


> what attitudes software engineers supposedly have

I don’t think I’ve ever seen a company take performance seriously. No one scoffs when a simple API service for frontend has 500ms response time! How many engineers even know or care how much their cloud bill is?


I'm sure Google invests a lot of resources in making Google Search load fast. AFAIK they serve a specialized version for each user agent out there.


One of the best counterexamples to the rule. I tried running Lighthouse on a few Google services that are less prominent and had a few good laughs.


Knuth is kinda right imo - parallelism as we have it now is unused by 90% of software outside of specialist use cases and running the same single-threaded program on multiple data items.

Programming languages and hardware both offer poor support for fine-grained parallelism and it's very hard to speed up classical software using parallel approaches.


This felt more true a decade ago but there’s been a lot of improvement in both languages (e.g. Rust) and libraries - I routinely see most of the cores on my Macs fully loaded for things like working with media files which used to be single-threaded.


Languages are getting there but based on how much heartburn Rust's async causes, I still find it to be a very hard problem to give programmers a real multicore abstraction. And it starts from the ground up with operating systems designed around single core ideas.

When I first read about the multi core vs single core debate in college, I thought it was silly and that multi core would be just fine. But over the years I've developed a more nuanced view. The average user is mostly doing things that involve IO, so that makes multi core useful in that you can use multiple cores to wait for IO, but for actual computation single core performance continues to be the bottleneck.


async isn’t the only option: threading scales well and is often much easier to write and make production-ready (e.g. reasoning about peak memory usage can be hard in an async model).


> ...which is a long standing point of academic contention.

What contention? If anything, Luu is being rather generous–Knuth was just whining that the decades-long free lunch program was being cancelled.


VLIW (Itanium is a VLIW arch) is what's contentious, not multiprocessing.


OK I missed that. Thanks. But it looks like Itanium was only tangential to this discussion, in that Knuth thinks multicore programming may be an even worse mistake than Itanium.


I thought he summarised it pretty well. Jeff Atwood was only picked as an example. There are LOTS of high-profile web development thought leaders with huge followings who constantly pump out similar views. And a lot of their followers just blindly accept what they were told.


Every company stopped caring, especially the companies who were at the forefront of standards and good web design practices, like Google and Apple.

Google recently retired their HTML Gmail version; mind you, it still worked on a 2008 Android phone with 256 MB of RAM and an old Firefox version, and it was simply fast... of course the new JS-bloated version doesn't, it just kills the browser. That's an extreme example, yet low-budget phones have 2GB of RAM, and you simply cannot browse the web with these and expect reasonable performance anymore.

Mobile web sucks, and it's done on purpose, to push people to use "native" apps, which makes things easier when it comes to data collection and ad display for companies such as Apple and Google.


"Mobile web sucks, an it's done on purpose, to push people to use "native" apps which makes things easier when it comes to data collection and ad display for companies such as Apple and Google."

Partly for sure, but Amazon for example? Or Decathlon? (a big sports/outdoor chain in europe)

Their sites are just horrible on a mobile (or in Decathlons case also on a Desktop, that is not high end), but they also don't offer me their app in plain view, so I have to assume it is just incompetence. The devs only testing everything on their high end devices connected to a backbone.


> but Amazon for example? Or Decathlon? (a big sports/outdoor chain in europe)

Pretty much every company out there employs oxygen wasters who need "engagement" to justify their promotions/salaries. They don't care whether said "engagement" translates to actual profit.

If bloating the page or adding some annoying cookie banner allows them to come up with some random number that goes up (no matter whether the measurement is even correct), they'll happily take that opportunity even if it would cause actual profits to go down.


Yes, on Thursday Google ended their only viable "product".

RIP Google.

The new Reddit is unusable, and the old is well too old.

Twitch is borderline usable, with chat and video stream problems...

The list is long...

All changes are bad when you have the final formula because they are job security.

Eventually the monkeys on this ball of dirt will realize that jobs and money don't exist, but then it will be too late... oh, that is now!

RIP Humans.


> the old is well too old

What's wrong with the old Reddit UI?


It has usability problems with, e.g., collapsing a comment tree.

I returned to it at the last major reskin too, but then they fixed the new one enough to become usable.

Now they removed the middle version... they should have made recent.reddit.com for those who want to wait until new.reddit.com doesn't suck as much.


Using https://www.mcmaster.com/ makes me wish I were a hardware engineer. Makes every other e-commerce site feel like garbage. If amazon were this fast, I’d be broke within days. Why haven’t other sites figured this out?


As a hobbyist, I cannot justify the cost of McMaster. I will confess that I often use it to find the precise name of a part for purchasing on Amazon/AliExpress.

Maybe a quality service really does cost that much? But the gap in performances and usability is so great, it seems that something else must be at play sometimes.


wow! that was a refreshing experience

> Why haven’t other sites figured this out?

i suspect that most ppl cannot appreciate efficiency


[flagged]


If you are upset that I posted obvious GPT output, then here is a reason why you're failing at your disgruntlement:

I am able to post my prompt in whatever crazy alarmist fashion I would like and massage it into producing an output that I agree with, that represents my arguments and doesn't inflame or trigger...

aside from the 12 monkeys anti-GPT response, because they couldn't imagine that it's not generic GPT-barf, as opposed to the machinations of the OP to get a more salient point across without crossing lines....

oops..


WHO THE FUCK IS DOWNVOTING THIS?

@Dang - are you beholden to trolls now?

Jesus.. asking who consumes bandwidth is not "don't criticise the bandwidth providers"?))

SERIOUS

Show me every single account that downvoted this.

I gave you content, I uploaded an opinion which you monetize - yet you take censorship on my provided content as well as revenue.

Show me.


Where "users with slow devices" equals "anyone trying to keep hardware running more than a few years", it seems. It's enforced obsolescence.

I've said for a long time, devs should be forced to take a survey of their users' hardware, and then themselves use the slowest common system, say, the 5th-percentile, one day a week. If they don't care about efficiency now, maybe they will when it's sufficiently painful.
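
Short of handing every dev a 5th-percentile phone, CPU throttling in the tooling gets part of the way there. A sketch assuming the Lighthouse CLI is available (its default mobile run already applies a 4x slowdown; this just turns the dial further):

    # Audit a page as if the CPU were ~6x slower than the dev machine
    npx lighthouse https://example.com --preset=perf --throttling.cpuSlowdownMultiplier=6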


I have an iPhone 11 Pro. It is about 4.5 years old and still runs great. My guess is I can get another 1.5 to 2.5 years out of it.

My point here is not every phone which is more than a few years old stops working or cannot browse web sites.


Thing is, the boss is not a dev. He is a businessman.


> Surely, for example, multiple processors are no help to TeX

But TeX was designed to run on a single CPU core, so no surprise here. I wonder what TeX could have become if all Knuth had at the time was a multicore machine with cores managing maybe 0.1 MIPS each (or even lower). Like, what would the world have become if we lived in a counterfactual world where Intel and its buddies, starting in the 1970s, had boosted not the frequency and instructions per second per core but the number of cores?

My take is we'd have switched to functional-style programming in the 1980s, with immutable data, and created tools to describe multistage pipelines with each stage issuing tasks into a queue while cores concurrently pick tasks from the queue. TeX would probably have a simplified and extra-fast parser that could cut input into chunks and feed them into a full-blown, slow parser as the first stage of a pipeline, and then these pipelines would somehow converge into an output stream. TeX would probably prefer to use more lexical scoping, to reduce interaction between chunks, or maybe it would make some kind of barrier for pipelines where they all stop and wait for propagation of things like `\it` from its occurrence to the end.

This counterfactual world seems much more exciting to me than the real one, though maybe I wouldn't be excited if I lived there.


I assumed that to mean the layout work is limited to a single thread. You need to know what content made it onto page one before you can start working on page two, right?


There's also a huge tendency to design for fast, high quality connectivity. Try using any Google product on airplane wifi. Even just chat loads in minutes-to-never and frequently keels over dead, forcing an outrageously expensive reload. Docs? Good luck.

I wish software engineers cared to test in less than ideal conditions. Low speeds, intermittent connectivity, and packet loss are real.
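
For anyone who wants to try it, a rough sketch of baking that into an automated check with Puppeteer (API names are from recent Puppeteer versions and may differ in older ones; the URL is a placeholder):

  import puppeteer, { PredefinedNetworkConditions } from 'puppeteer';

  // Load a page under an emulated slow, high-latency connection and report how
  // long it takes to settle. Purely illustrative; the URL is a placeholder.
  async function loadUnderSlow3G(url: string): Promise<number> {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.emulateNetworkConditions(PredefinedNetworkConditions['Slow 3G']);
    const start = Date.now();
    await page.goto(url, { waitUntil: 'networkidle2', timeout: 120_000 });
    const elapsed = Date.now() - start;
    await browser.close();
    return elapsed;
  }

  loadUnderSlow3G('https://example.com').then((ms) =>
    console.log(`Settled after ${ms} ms on emulated Slow 3G`));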


I call this "Designed in California" like some fruity company proudly says on their devices.

For software this means designed on top of the line hardware, with fast low latency internet. TFA describes the consequences.

For hardware it means designed inside climate controlled dust free offices and cars for people with long commutes to work on straight roads where you don't have to pay much attention.

Think phones shutting down if you have a real winter. Think smart turn stalks that can't signal a left turn on a crossroads that's not at 90 degrees. Think ultra thin laptops where the keyboard is so dust sensitive it lasts 3 months if you use them outdoors. Think a focus on audiobooks and podcasts because you're stuck in traffic so much.


What I find interesting is that design of websites is often 'mobile first' but rarely 'mobile connection first'.


The last decade of my life has been a speedrun in "less than ideal conditions" for computing. CGNAT, 5mbit dsl, spotty "fixed wireless" and my latest debacle: starlink, although that seems to be getting better slowly; used to drop 15/60 seconds, now it drops more like 4/200 seconds. Constant power issues and lightning strikes - i only have 1 computer that has a working NIC, because evidently tiny power fluctuations are enough to send most chipsets into the graveyard. I had to switch to full fiber between all compute sites on my property, and a wifi backup, because copper is too risky.


Do you have earth return on your power?


Yes, and it works, too. But i have outbuildings with servers and networking gear in them and metal conduit between buildings on/underground. Voltage potentials don't care, if there's a wet extension cord or something that's a less resistive path to start flowing and some gear is on that circuit or adjacent, it'll go.

Overall switching to fiber is cheaper than aggressive lightning protection, and i moved all the network gear to a commercial UPS, and the interconnect between the "modems" and the switches is media converted to fiber for 3 feet. any time i have to run networking further than 6' or so i run fiber and put a media converter or a single gbic switch there. I'm hoping i futureproofed enough to upgrade to 10gbit in a year or so. My backup NAS has 10gbit but nothing else is connected at that speed yet.

edit: One time lightning hit a pine tree in the back of the house, and it used my dipole antenna to reach a tree 80' away, and apparently there was an extension cable near there, which went back into the house, and it went all the way around the house, to reach the telco CPE box where DSL lived. the telco box and my mains earth are roughly 1 meter apart. That surge took out my main desktop computer, a washing machine (singed the dryer where it arced between it and the washer), the toaster oven, a microwave, my NAS, and my router connected to telco. It went two different paths inside the house, along both outside walls, one via mains copper and the other via cat5e copper. That was quite an expensive misadventure.


Developers are expensive, so we give them fast connections and fast computers. Then we act shocked when modern software/web requires fast computers.

Unless it's somehow regulated that people test less than ideal conditions it won't happen, yet most people (myself included) don't really want that either.


I live in some hills, and some days I need to fully drive out of them to get Google Maps to load the map - the map I'm already using half a GB to cache locally on my phone. What's even the point of that cache? Same thing with Spotify: why is there latency searching my downloads library in offline mode?


I often use a Thinkpad X220 (which still works for a lot of my usage and I'm not too concerned about it being stolen or damaged) and the JS web is terrible to use on it. Mostly resulted in my preference of using native software (non-electron), which generally works perfectly fine and about as well as on my "more modern" computer.


Whenever I pull out old machines I’m a little shocked at how responsive they are running a modern OS (Win10 or Linux), so long as the modern web is avoided. Anything with a Core 2 Duo or better is adequate for a wide range of tasks if you can find non-bloated software to do them with.

Even going back so far that modern OS support is absent, snappiness can be found. My circa 2000 500Mhz PowerBook G3 running Mac OS 9.1 doesn’t feel appreciably slower than its modern day counterpart for more than one might expect, and some things like typing latency are actually better.


“True UNIX way” solution to this would be getting the data from the Web non-interactively and redirecting it into some regular expressions to produce the only thing you want. Random example:

https://github.com/l29ah/w3crapcli


A Core Duo is perfectly fine with an ad blocker:

git://bitreich.org/privacy-haters


My 12" Macbook was my main computer for 2022 and part of 2023. It ran smoothly for my workflow, even with a 4k monitor.

However YouTube and Gmail brought it to a crawl. I had to sell it because Youtube Music slowed down my work.


I have a mac mini 2011 and it works great with Linux Mint. But load youtube and you’re in a world of pain.


I remember going through a similar situation when using a netbook. At first they were ok for doing light work and even accessing websites, but as time went on websites and browsers became more and more heavy. Youtube was a struggle, even Google felt laggy. Want to browse a map? You are better off getting a physical one! But, no worry, it was still fine for other low intensity things and some programming projects I worked on. About two years later, both KDE and GNOME would struggle to run on it; it was painful. Maybe I should have switched to an all CLI/terminal workflow, but eventually I bought a used thinkpad X220 which was like taking a breath of fresh air after holding it for years. But now I do see the same pattern emerging, much slower mind you, but it is surely happening. Some websites feel sluggish, some gnome apps also feel sluggish and I have to avoid electron apps like the plague. But at least it has enough brawn (16GB of RAM and an SSD) to cut through the bullshit and work ok on most things. Maybe I should have embraced that terminal lifestyle after all...


I'm sure there's an odd parable in netbooks. Around the time they first started appearing, as hacky projects and early commercial products, they were lean and mean: lightweight local software to do things online, compact flash IDE converters versus HDDs (which seems like a precursor to SSDs by proving a market), bare-bones Linux, and a new wave of web standards and performance which non-IE browsers were leading in.

Then, after going mass market, OEMs put full Windows and client software on there, and the web became heavier, so webmail or simple office/collaboration slowed down. After that, mobile/tablets were in competition for the market, and have practically devoured non-professional PC usage outside of gaming.

What I keep coming back to is bundling versus unbundling - having one tool to do everything with likely inevitable compromises, versus splitting into a number of precise specialized ones. It's difficult to convince any decent number of people to take something that does less.


If one cares about accessibility of a website to people with much slower devices, particularly living in less developed parts of the world, I guess there are more considerations:

- using more clear English with simple sentence structures should make the content more accessible to people who don’t read English with the fluency of an educated American

- reducing the number of requests required to load a page as latency may be high (and latency to the nearest e.g. cloudflare edge node may still be high)


> reducing the number of requests required to load a page

In practice this pretty much requires pure SSR and "multiple page" design, given the amount of network roundtrips on typical SPA sites. (Some lightweight SPA updates may nonetheless be feasible, by using an efficient HTML-swapping approach as seen in HTMX as opposed to the conventional chatty-API requests and heavy DOM manipulation.)
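
A minimal sketch of what that HTML-swapping pattern looks like without any framework (the /comments endpoint and #comments container are made up): the server renders the fragment, the client does one fetch and one DOM insertion.

  // Fetch a server-rendered HTML fragment and splice it in: one round trip per
  // interaction, no JSON parsing or client-side templating. The /comments
  // endpoint and #comments container are hypothetical.
  async function appendNextPage(pageNum: number): Promise<void> {
    const res = await fetch(`/comments?page=${pageNum}`);
    if (!res.ok) throw new Error(`Failed to load page ${pageNum}: ${res.status}`);
    const fragment = await res.text();            // already-rendered HTML
    document.querySelector('#comments')!
      .insertAdjacentHTML('beforeend', fragment); // no per-node DOM building
  }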


It's a shame that NodeBB was not included in the list of forums tested.

We worked really hard to optimize our forum load times, and it handily beats the pants off much of what we've tested against.

But that's not much of a brag, the bar is quite low.

Dan goes on and lambasts (rightfully so) Atwood for deriding Qualcomm and assuming slow phones don't exist.

Well, let's chat, and talk to someone whose team really does dogfood their products on slower devices...


Well, to Dan's credit, he kindly re-ran a subset of the tests for NodeBB:

https://community.nodebb.org/post/98597


These sites can and should be much better. Yes. Definitely.

At the same time, while a 10s load time is a long time & unpleasant, it doesn't seem catastrophic yet.

The more vital question to me is what the experience is like after the page is loaded. I'm sure a number of these sites have similarly terrible architecture & ads bogging down the experience. But I also expect that some of those which took a while to load are pretty snappy & fast after loading.

Native apps probably have plenty of truly user-insulting payloads they too chug through as they load, and no shortage of poor architectural decisions. On the web it's much much easier to see all the bad; a view source away. And there is seemingly less discipline on the web, more terrible and terribly inefficient cases of companies with too many people throwing whatever the heck into Google Tag Manager or other similar offenses.

The latest server-side React stuff seems like it has a lot of help to offer, but there are still a lot of questions about rehydration of the page. I also lament seeing us shift away from the thick-client world; so much power has been given to users by the web 9.9 times out of 10 just being some RESTful services we can hack with. In all, I think there's a deficiency in broad architectural patterns for how the thick client should manage its data, and a real issue with ahead-of-time bundles versus just-in-time, load-behind code loading that we have failed to make much headway on in the past decade, and this lack is where the real wins are.


Yeah this is exactly the kind of nuance I'd love to see explored but as you say, auditing native apps is difficult, and it's really hard to compare apples to apples unless you can really compare equivalent web and mobile apps.


>Many pages actually remove the parts of the page you scrolled past as you scroll

There is a special place in hell for every web developer who does that.


It’s a performance optimization for rendering a large amount of html. If the DOM had all the items in memory it would perform much worse. Thankfully browsers are working on a feature where you can keep the markup in the DOM for things like CTRL-F without hurting performance.

Granted the main reason such a technique is needed is designs that avoid pagination.


We had web pages with big lists and tables in the DOM 20+ years ago, they were fine. The difference is that now we use web frameworks that do work proportional to DOM size many times per second.


Call me a conspiracy theorist, but I think it's all a ploy to make it harder for people to save stuff. If the content just stayed there after you loaded it, you could just save the page as HTML and, if there weren't a lot of javascript shenanigans, it would save okay. When you add this element removal, that doesn't work anymore. I'm pretty sure Instagram, for example, does that with the intention of making it harder for people to save profiles.


I am usually just a backend developer, but for a little reporting application that I built, I couldn't get the UI team to do a UI in the short time that I had to build it, so I had it output some basic HTML. About 10000 list items. Rendered imperceptibly fast on my browser.

Then because of $mandate, the report was moved to the team's standard React UI frontend. Now it takes 5 seconds to load and only gives you like 100 items at a time, so Ctrl-F is broken. Also, filter dropdowns somehow did not work until they fixed it, so it appears like the select tag was not fit for their design and they rolled their own.


Pagination is part of it but most of it is simply using inefficient JavaScript frameworks and not learning how the DOM or CSS really work. The maximum number of elements you could have on a phone a decade ago was measured in hundreds of thousands (median) or millions (iPhone), and the hardware & browsers have gotten faster. The problem is that you have had a lot of very turnaround focused developers shoveling out code which uses deeply nested elements, poorly-scoped event handlers, etc. which mean that too many DOM elements are created, unnecessarily touched during updates, and the overall complexity means that on the off chance someone opens their browser’s profiler they won’t see a single obvious problem and will likely proclaim it unfixable and say something like “the DOM is slow”.
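
To make the "poorly-scoped event handlers" part concrete, the classic fix is one delegated listener on the container instead of a listener per row; a rough sketch (the element names and the openItem handler are invented):

  declare function openItem(id: string): void;  // stand-in for the real handler

  // Instead of one listener (and one closure) per row, all of which get torn
  // down and re-created whenever the list re-renders...
  // document.querySelectorAll('.row').forEach(row =>
  //   row.addEventListener('click', () => openItem(row.dataset.id!)));

  // ...a single delegated listener on the container covers every current and
  // future row, no matter how many there are.
  document.querySelector('#list')!.addEventListener('click', (event) => {
    const row = (event.target as HTMLElement).closest<HTMLElement>('.row');
    if (row?.dataset.id) openItem(row.dataset.id);
  });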


What is the feature called?


It’s content-visibility. It’s already in Chrome but not Firefox or Safari: https://caniuse.com/css-content-visibility
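
If anyone wants to experiment, a sketch of opting a long list into it without tearing nodes out of the DOM (the .comment selector is made up; per the caniuse page above, this currently only helps in Chromium):

  // Let the browser skip layout/paint for off-screen items while keeping them
  // in the DOM, so Ctrl-F and accessibility still work.
  for (const el of document.querySelectorAll<HTMLElement>('.comment')) {
    el.style.setProperty('content-visibility', 'auto');
    // Placeholder size so the scrollbar doesn't jump as items are skipped.
    el.style.setProperty('contain-intrinsic-size', 'auto 200px');
  }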


It's not only the user who is affected by this.

The difference between a 2MB and a 150KB CSS file can be a lot of bandwidth.

The difference between a bad and good framework can be a lot of CPU power and RAM.

Companies pay for this. But I guess most have no clue that these costs can be reduced.

And some companies just don't care as long as money is coming in.


A lot of companies don't care about end user performance experience. Companies will burden issued PCs with bloated anti-virus, endpoint monitoring, TLS interception, Microsoft Teams, etc. If there's no explicit responsiveness goal, then performance dies by a thousand cuts.


>Companies will burden issued PCs with bloated anti-virus,

Ugh, bane of my day job. I work with two companies in particular that have high security requirements in their environments and very similar total workloads with our software. One spends around $250k (ish) a year in self hosting costs, the other over a million to get the same throughput. The less costly one worked with us as a vendor to get anti-virus/endpoint exclusions on the file io intensive part of our application and put anti-virus scanning before that point, then harden those machines in other ways. The other customer is "policy demands we scan everything everywhere and the policy is iron law".


Worst is, nowadays such bloated "security" software is being forced onto Linux servers too... every time I check why something feels slow, Microsoft Defender is hogging resources.


It's a numbers game. Mostly the difference doesn't matter at all to the vast majority of users. Optimizing for the bottom 1 or 2 percent that don't have any disposable income to update their phones, or pay for your wonderful products or services is not a big priority. And not all companies have rockstar developers working for them. That's why things like wordpress are so popular.

I actually pulled the plug on a wordpress site for my company last week. We now have a static website. It's a big performance improvement. But the old site was adequate even though it was a bit slow to load. So, nobody really noticed the improvement. Making it faster was never a requirement.

What is worth optimizing for is good SEO. There's of course a correlation between responsiveness and people giving up and abandoning web sites. That's why big e-commerce sites tend to be relatively fast. Because there's a money impact when people leave early.

What I find ironic is that the people complaining about this stuff are mostly relatively well off developers with disposable incomes and decent hardware. If they use crappy/obsolete hardware it's mostly by choice; not necessity. Some people are a bit OCD about performance issues as well. They notice minor stutters that nobody cares about and it ticks them off.

2MB is nothing. I'm saying this as somebody who used cassettes, and later floppy disks with way less capacity. But that's 35 years ago. The only time when this matters to me is when I'm on a train in Germany and my phone is on a really flaky mobile network that barely works. Germany is a bit of a third world country when it comes to mobile connectivity. So, that's annoying. But not really a problem web developers should concern themselves with.


Eh. Cloudfront pricing starts at 8.5c per GB and goes down to 2c. I think you’d struggle to use that pricing as a justification when compared to the software engineer hours required to shrink down a CSS bundle. (don’t get me wrong, 2MB is insane and ought to be a professional embarrassment. But I think you’re going to struggle using bandwidth bills as the reason)
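
Rough numbers: going from 2MB to 150KB saves about 1.85MB per uncached load, so even a million such loads a month is only around 1.8TB, roughly $150 at that top 8.5c/GB rate. Real money, but rarely the line item that wins the argument.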

I agree with you about frameworks, though. So much waste in creating everything as (e.g.) a React app when there’s no need. Sadly the industry heavily prioritises developer experience over user experience.


This, although I often feel that modern web frameworks (React and similar) do not provide a better developer experience.


If you don't have a good phone and a high speed connection, you don't have any money to spend on either the sites products or the products of their advertisers.

When looked at from that angle, bloat is a feature.

It's not reasonable to have an expectation of quality when it comes to the web.


Well that take sure goes from 0 to 60 real fast. Can you really be sure that only people with good phones and connections have money to spend? Just to poke some obvious holes: what about old rich people who have a distaste for modern phones but spend lavishly on vacations every year? Or outdoorsy rich people who are frequently in areas with poor cell coverage but are constantly purchasing expensive camping/climbing equipment? How about people who aren’t rich, but work for companies where their input is part of a purchasing process with millions of dollars of budget? Those people are all super-lucrative advertising targets, I don’t think advertisers are intentionally weeding them out.


I think you are close to the truth there.

But I doubt companies purposely increase their hosting costs as some kind of firewall to only include the rich. More like they just don’t care. Same reason for technical debt, everyone wants to grow and move needles.

If a company could magically make their site more available and efficient for free, I am sure they would jump at the chance. But spending a million on that vs. a million on ads won't seem worth it.


Ah, the modern AAA games take on MTX. Who cares about gamers, fish for whales.


This is addressed in TFA and is not true. The bloat is a symptom of what I've seen referred to as the "laptop class" and is unrelated to anything feature-adjacent.


Virtually all pharmaceutical advertising is targeted at prescribers, yet we all have to watch/view them.


That's mostly an American thing.


As an american by accident, i apologize, you're right. More civilized countries have outlawed that sort of advertising.


Huh? I have a 5 year old, mid range android, and I still buy things online.

Not everyone cares about phones.


Also, there are some websites targeting users with little money as well.


I feel like there's a good point made by the Discourse CEO about Qualcomm (and competitors) - the product decision to segment their CPU line by drastic differences in single-threaded CPU perf is a highly anti-consumer one.

In contrast AMD and Intel use the same (or sameish) CPU arch in all of their lineup in a given generation, the absolute cheapest laptop I could find used a Pentium 6805, which still has a GB6 score of well over 1000, sold in a laptop that's cheaper than most budget smartphones.

In contrast, Qualcomm and Mediatek will sell you SoCs that don't even have half of that performance as a latest-gen 'midrange' part.


It’s not just slow devices, it’s also any time you have any kind of weak connectivity.

I think every OS now has tools to let you simulate shitty network performance these days so it’s inexcusable that so many sites and even native apps fail so badly anytime you have anything less than a mbit connection or greater than 50ms latency :-/


It’s not just weak connectivity. I know people in rural areas who still have less than 1 Mbps internet speed over their DSL landline. Using the internet there isn’t a lot of fun.


Which is absurd when you think that the internet used to be usable on 14.4k modems.

I remember having to plan to take up hours of time on our phone line to download giant files that were smaller than many basic webpages these days (ignoring things like photos where there's obviously a basic size/quality tradeoff + more pixels)


When i first moved to where i live now DSL had a waitlist, so i tried both a verizon hotspot (myfi!) and dialup. Dialup with HTML gmail (for slow connections!) took minutes to load. IRC was completely usable, but hangouts was not. danluu's website would have loaded just fine, as an example. I just remembered that after getting DSL if more than one person decided to watch a youtube video the pings went up in the 1000ms range.


Yes, 1 Mbps was actually high-speed internet 25 years ago.


How web bloat impacts users: negatively. Better do your best to fix it.

This stuff is simpler than we let it be sometimes, folks.


> This stuff is simpler than we let it be sometimes, folks.

Meanwhile, I'm watching a team build a cathedral when all they needed was a shack.


Why not build a cathedral if someone else is paying?

I've never seen companies where developers are rewarded for performance improvements or any kind of improvement. Made an improvement? Nice! Good job! And that's it.


The point is, you build a shack so you can build a cathedral where it’s warranted. If you are stuck maintaining a cathedral you can’t move on to bigger better things.


Yep, you build a shack and charge the users the price of a starbucks latte per month because "it's just a starbucks latte".

Then you wonder why the solo founder saas has no customers.


I'm not sure what you're talking about; but what I was trying to say is I tend to see teams get charged with building X (which should be a cathedral), but then build a cathedral of configuration parsing, and a cathedral of CRUD; instead of focusing on X.


I like my job security large, ornate and full of stained glass.


> Something I've observed over time, as programming has become more prestigious and more lucrative, is that people have tended to come from wealthier backgrounds and have less exposure to people with different income levels. An example we've discussed before, is at a well-known, prestigious, startup that has a very left-leaning employee base, where everyone got rich, on a discussion about the covid stimulus checks, in a slack discussion, a well meaning progressive employee said that it was pointless because people would just use their stimulus checks to buy stock. This person had, apparently, never talked to any middle-class (let alone poor) person about where their money goes or looked at the data on who owns equity. And that's just looking at American wealth. When we look at world-wide wealth, the general level of understanding is much lower. People seem to really underestimate the dynamic range in wealth and income across the world.

Perhaps the falling salaries for programming in the US could be a good thing in that regard. So many people get into this career because they want to make it big, which seems to drive down the quality of the talent pool.


Relating to the aside about opportunities in different countries: the comparison of potential programming career prospects between a poor American and a middle-class Pole feels reasonable for someone born around the same time as the OP (early ’80s I guess), but I suspect it’s since shifted in Poland’s favour.

I think the relative disadvantages of a poor American compared to their wealthier peers have increased as there’s more competition (as the degree is seen as more desirable by motivated wealthy parents) and the poor student likely won’t even have a non-phone computer at home where all their wealthier peers probably will. Possibly they could work around the competitiveness of computer science by going via some less well-trodden path (eg mathematics or physics) except that university admission isn’t by major. They may also be disadvantaged by later classism in hiring. Meanwhile a middle class Pole will have access to a computer and, provided they live sufficiently near one of the big cities, access to technical schools which can give them a head start on programming skills (and on competitive programming which is a useful skill for passing the current kind of programming interview questions). To get the kind of good outcome described in the OP, they then need to get hired somewhere like Google in Zurich (somewhat similar difficulty to in the US except the earlier stages were easier (in the sense of being more probable) for the hypothetical Pole) and progress from there (maybe impeded by initially not being at the headquarters / fewer other employment opportunities to get career advancement by changing jobs). Class will be less of a problem as the hypothetical middle class pole isn’t so different in wealth from other middle class Europeans and you get much less strong class-selection than when (e.g.) Americans are hiring Americans.


When websites pack in too many high-res images, videos, and complex scripts, it’s like they’re trying to cram that overstuffed suitcase into a tiny space. Your device is struggling, man. It’s like it’s running a marathon with a backpack full of bricks.

So, what happens? Your device slows down to a crawl, pages take forever to load, and sometimes, it just gives up and crashes. It’s like being stuck in traffic when you’re already late for work. And let’s not even talk about the data usage. It’s like your phone’s eating through your data plan like it’s an all-you-can-eat buffet.

Now, if you’re on the latest and greatest tech, you might not notice much. But for folks with older devices or slower connections, it’s a real pain. It’s like everyone else is zooming by on a high-speed train while you’re chugging along on a steam engine.

So, what can we do? Well, we can start by being mindful of what we put on our websites. Keep it lean, mean, and clean, folks. Your users will thank you, and their devices will too. And hey, maybe we’ll all get where we’re going a little faster.


Maybe we'll see the return of the proxy + lightweight browser model like Opera Mini.


And lightweight APPs, one tap to load all...


> Another example is Wordpress (old) vs. newer, trendier, blogging platforms like Medium and Substack. Wordpress (old) is 17.5x / 10x faster (LCP* / CPU) than Medium and 5x / 7x faster (LCP* / CPU) faster than Substack on our M3 Max ...

It's a persistent complaint among readers of SlateStarCodex (a blog which made a high-profile move to Substack from an old WordPress site). Substack attributes the sluggishness to the owner's special request to show all comments by default, but the old WordPress blog loads all comments by default and was fine even on older devices.

https://www.reddit.com/r/slatestarcodex/comments/16xsr8w/sub...

https://www.reddit.com/r/slatestarcodex/comments/1b9p55g/any...


He mentions Substack, which is maybe the most egregious example of bloat I regularly encounter. Like I cannot open Scott Alexander's blog on my phone because it comes to a crawl.

But the Substack devs are aware of this. [They know it's a problem](https://old.reddit.com/r/slatestarcodex/comments/16xsr8w/sub...).

>I'm much more of a backend person, so take this with somewhat of a grain of salt, but I believe the issue is with how we're using react. It's not necessarily the amount of content, but something about the number of components we use does not play nicely with rendering content at ACX scale.

>As for why it takes up CPU after rendering, my understanding is that since each of the components is monitoring state changes to figure out how to re-render, it continues to eat up CPU.

They know—but they do nothing to fix it. It's just an impossibility, rendering all those comments.


>Substack

I don't access this site a lot, but I remember that until very recently they had a different front-end, and it worked great. Honestly, I think they will follow the path of medium.com and start to make the user experience worse and worse.

It's a site where people post text, a few images, maybe 1 or 2 videos per post. It shouldn't be complicated.


Nobody cares about people with older devices. We've shifted to a mode where companies tell their customers what they have to do, and if they don't fit the mold they are dropped. It's more profitable that way - you scale only revenue and don't have to worry about accessibility or customer service or any edge cases. That's what big tech has gotten for us.


Clues that a market is not competitive...

What most impresses me is that this happens in many markets that should be competitive by any sane rationale. Like group buying or hotel booking. Yet they also do that kind of shit, and people still have nowhere to go.

The world economy became integrated and incredibly rigid.


You’re getting downvoted but I think despite the tone you are correct. 10 years ago corporate guidance on web dev was backwards compatibility going back several versions. Now it’s hardly any concern for anything more than 6 months old.

More than anything I think it’s because corporate IT has had to modernize due to security. Security now wants you to update constantly instead of running old vetted software. You also cannot demand user use an old version of a browser that still supports some old plugin. And as a vendor it’s not profitable to support people who maintain that mindset.

Also “update to the latest version” is the new “turn it off and back on again,” when it comes to basic IT help.


Who is this mythical end-user with an old browser? Because they don’t show up in browser usage statistics.

https://gs.statcounter.com/browser-version-market-share

Chrome is evergreen, even on Android. Safari, after a bit of a fallow period, is updated fairly aggressively, and though it’s still coupled with OS updates, it’s no longer married to the annual x.0 releases.

Mind you, I still believe, and practice, you should write semantic HTML with progressive enhancement. But at the same time, I absolutely do not think you should go out of your way to test for some ancient version of Safari running on a first-generation iPad Pro—use basic webdev best practices, and don’t spend time worrying that container queries aren’t going to work for that sliver of the market.


Browsers may be self-updating but hardware is not. You can't just download more RAM or a faster CPU.


Most people auto update their software or they don’t at all. What they don’t do is buy a brand new laptop as soon as it’s out. And the one they have is a cheap one from HP or Dell. To know their pain, try to use one of these.


I've got an iPad Air 2 running iOS 15.8. My user agent will surely tell you I'm only one or two major versions behind the "latest and greatest" but the hardware itself is a different story. On this device modern GitHub consistently crashes when displaying more than a few hundred lines of code. I've lost the ability to use a perfectly functioning device due to bloatware.


Exactly. The landscape has changed because those old browser users have been forced to update.


Because the people with money who are buying your products are all running the latest version of iOS. The ones on a 6-year-old Android version are not spending anything, therefore it isn't worth investing money in making sure it works for them.


Part of it was that users were terrible at updating browsers. You needed to support Internet Explorer 6, or cut off a third of your customers. It sucked.

Now every browser gets updates, automatically and aggressively. The only real outlier is Safari, but even that updates way quicker than older browsers used to.

As a result, who needs backward compatibility?


Without all the compatibility shims, it means that you can drop code bloat sooner, when the JS gets replaced with a native browser capability.


Some years ago I tested real-world web sites; it turned out only about 30% of the JavaScript they load was actually invoked by the user's browser (even for sites optimized with Closure Compiler, which has some dead code elimination):

https://github.com/avodonosov/pocl

The unused JavaScript code can be removed (and loaded on demand), although I am not sure how valuable that would be for the world. It only saves network traffic, parsing time and some browser memory for compiled code. But JS traffic on the Internet is negligible compared to, say, video and images. Will the user experience be significantly better if the browser is saved from the unnecessary JS parsing? I don't know of a good way to measure that.
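
For anyone who wants to reproduce that kind of measurement today, Chromium's built-in coverage profiler (exposed through Puppeteer) gives a rough equivalent; a sketch (the URL is a placeholder, and it counts bytes executed during load, which may not match exactly what my tool measured):

  import puppeteer from 'puppeteer';

  // Measure how much of the downloaded JavaScript a page actually executes
  // during load. Coverage ranges are byte offsets into each script's source.
  async function unusedJsRatio(url: string): Promise<number> {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.coverage.startJSCoverage();
    await page.goto(url, { waitUntil: 'networkidle2' });
    const entries = await page.coverage.stopJSCoverage();
    let total = 0, used = 0;
    for (const entry of entries) {
      total += entry.text.length;
      for (const range of entry.ranges) used += range.end - range.start;
    }
    await browser.close();
    return total > 0 ? 1 - used / total : 0;
  }

  unusedJsRatio('https://example.com').then((r) =>
    console.log(`~${(r * 100).toFixed(0)}% of loaded JS never ran`));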


I'm glad people remember what WW in WWW means. :)

It makes me very sad to see that reddit's new design is so heavy it can't even be accessed by part of the world. It's like parts of the internet are closing their doors just so they can have more sliding effects that nobody wants.

Or maybe I'm just a weird one who prefers my browser to do a full load when I click a link.

Btw there was a time everyone kept talking about "responsive" web design and, having used only low-end smartphones and tablets, I kept finding it weird that there was such focus on the design being responsive for mobile devices when those mobile devices were so extremely slow to respond to touch to begin with. Of course I know that's not what they meant, but it still felt weird.


> I'm glad people remember what WW in WWW means. :)

Welcome to the Wide Web, where bloat is the norm.


That exchange with Jeff Atwood makes me somewhat angry. It's one thing to be annoyed at a hardware vendor (justified or not), quite another to take it out on the users of said hardware.

And while I appreciate that engineers can often afford to be blunter than people in other disciplines, I also think that a founder of two successful companies should have a bit more restraint when posting. Writing "fuck company X" (unless we're talking about a grossly unethical company, maybe) just seems like very immature behaviour to me.


The most interesting part of this is the comments about software shifting from a normal career to a prestige target for wealthy families, and that this demographic shift has massive consequences on technology design and services.


I think it would be useful to separate data & code here. What if you kept the code the same, and downgraded the assets so the overall package is smaller/easier to process/execute? Or maybe tweaked the renderer so the same code & data can render quicker and slightly worse image quality consuming fewer CPU cycles? Basically I'm envisioning something like a game where the same game data+code can support multiple performance targets (except in this case the different CDN hookups to get the assets out, rather than everyone getting the bloated data download)


It's interesting research, but at the end of the day, the websites are there to make money. Well, looking at the table, maybe the author's own isn't, but the rest is. And so, I think the businesses don't optimize more because there isn't much more money to be made that way. Instead, the same effort is better spent elsewhere, like marketing, having a software that's quickly adaptable, that's easy to get interchangeable developers for. So they are optimized, just not for the speed on low-end devices. Different goals.


I was thinking about this the other day

I’d happily pay $100/month to access the internet similar to that of pre-2005ish

As in, banning almost all commercial activity

I truly believe Google isn’t getting worse; it's just that the incentives behind the creation of web content have become progressively misaligned with the user’s desires

I want a high quality internet, and am willing to fork out large sums of money to access it

I hope I’m not alone


Well, that's not the pre-2005 internet I remember. What I remember are popups, pop-unders, poisoned search results due to crude SEO tactics like including small background-colored text on websites, endless rings of web pages referring to each other, heavily blinking banners, and even the best ad blocker being so slow on the machine that it's no joke. And this is the commercial abuse only, there was a lot of other types going around.

I do despise many aspects of the current internet, but I think that it's the fallibility of man that poisons the nice things, and I don't think that it was ever too much different in this regard.

For a different internet, there are ways to go about it. I'm not sure how much of it you already know.

Millionshort provides alternative search results to queries. I think this is similar to what you're looking for, and it's free.

Alternative networks spring up from time to time, like the Gemini network. I'm not sure how much of a content desert they are, as I'm not a frequent user.

Generally if you hang around in free software / open source spaces, they have a lot of people with an alternative take on the modern things, including people taking part of an internet that's not mainstream, for example by excluding running any JavaScript. This can lead to other places, forums, and so on.

I wish you luck. But be prepared that the past is gone. Maybe never existed in the first place.


Dan, I respect you and I feel your pain, but...

> Another common attitude on display above is the idea that users who aren't wealthy don't matter.

If you want to make money, then this is the correct attitude. You need to target the users who have the means to be on the bleeding edge. It may not be "fair" or "equitable" or whatever, but catering to the masses is a suicide mission unless you have a lot of cash/time to burn.

This post reminds me of the standard Stallman quip "if everyone used the GPL, then our problems would be solved"


As someone who makes bloated sites I can only say that management doesn't give a fuck about bloat as long as features are checked off in due time. So please don't blame me


What about pride in your vocation?


Next try out the search engines.

Anecdotally, Google Search loads ~500ms faster than DuckDuckGo on the OG Pinephone.


That is one performance metric. What about energy use, and loading search results, not just the home page? I find DDG faster from a perception point of view. I imagine on some metrics it is faster.


Sorry, should have been more precise. I was measuring loading search results. E.g.:

https://www.google.com/search?client=firefox-b-e&q=test9999

vs.

https://duckduckgo.com/?t=ftsa&q=test9999&ia=web


Did you measure the time a user needs to scroll and click to reject Google cookies?


I really wish he had compared an M3 Mac to a 6-year-old Intel chip and not some random processor I’ve never seen or experienced, and that I’m not sure is even available in the USA


I can vouch that my 2017 MacBook Pro struggles with all kinds of tasks, especially web ones.


This is one of the reasons I've started building https://formpress.org. Seeing the bloat in many form builder apps/services, I've decided there is a need for a lightweight and open source alternative.

How do we achieve that lightweightness? Currently our only sin is the inclusion of jQuery, which is just there to have a cross-browser way of interacting with the DOM. We hand-craft the required JS code based on the features used in the form builder, then ship a lightweight runtime whose whole purpose is to load the necessary JS pieces so you get a functional form that is lightning fast. PS: we haven't gone the last mile in optimizations, but we definitely will. Even in its current state, it is the most lightweight form builder out there.
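
Roughly, the idea behind the runtime boils down to something like this (a simplified sketch rather than our actual code; the module paths and config shape are just illustrative):

  // Simplified sketch of a feature-gated runtime: only the JS for features a
  // given form actually uses ever gets requested. Module paths are invented.
  interface FormConfig {
    features: string[];  // e.g. ['validation', 'fileUpload']
  }

  const loaders: Record<string, () => Promise<unknown>> = {
    validation: () => import('./features/validation.js'),
    fileUpload: () => import('./features/file-upload.js'),
    conditionalLogic: () => import('./features/conditional-logic.js'),
  };

  export async function bootForm(config: FormConfig): Promise<void> {
    // A plain form with no extra features ships essentially no extra JS.
    await Promise.all(config.features.map((f) => loaders[f]?.()));
  }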

It is open source, MIT licensed, built on a modern stack (React, Node.js, Kubernetes and Google Cloud), and we also host a freemium version.

I think there will be an ever-increasing need and market for lightweight products, as modern IT means a lot of products coming together, so each one should minimize its overhead.

Give our product a go and let us know what you think?


Is this new or old reddit being benched?

That would be an interesting direct comparison.


New Reddit, per the appendix. I think that Old Reddit is likely to be fairly competitive (I would guess placing near Wordpress), and yeah I agree it would be interesting to have in the table to see how far it's fallen.


A problem that recently started in Feb 2024 for me is probably unrelated to the topic, but close enough that I'm posting in the hopes someone has an idea of what is happening.

I am running on a relatively new Lenovo Legion (~ 18 months old) with 64kb of ram running windows 11. About 6 weeks ago I began getting the BSOD every time I streamed a live hockey game (I watch maybe 3 games a week from Oct to Jun via Comcast streaming or 'alternative' streams).

The crashes happened multiple times every game. After maybe 10 games of this, I began closing and reopening the browser during every game break. I've experienced zero crashes since doing that.

When the crashes started, I was using Chrome - but I still experienced BSOD crashes when I switched and tested Firefox and Brave. Just very odd for this to start happening suddenly without any changes to my machine that I could pinpoint - no BIOS or Nvidia driver upgrade that I can recall.


> with 64kb of ram running windows 11

I hope you mean GB.


haha... yea, I was a bit off

As an aside, I'll use this as a way to update my post. I found a discussion of someone having a similar problem, and it turns out it was a Windows Office driver relating to automatic Office updates. I turned it off, which has resolved the issue.


I would still add that users running out of monthly mobile data volume are still a big issue, likely bigger than slow phones. They can't load most websites with 64 kbit/s, because they are multiple megabytes large, often without good reason.

For example, when Musk took over Twitter, he actually fixed this issue for some time, I tested it. But now they have regressed again. The website will simply not show your timeline on a slow connection. It will show an error message instead. Why would slow connections result in an error message?!

A simple solution that e.g. Facebook (though apparently not Threads) and Google use, is to first load the text content and the (large) images later. But many websites instead don't load anything and just time out. Probably because of overly large dependencies like heavy JavaScript libraries and things like that.
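
The text-first, images-later part barely takes any code either; a sketch using the usual data-src plus IntersectionObserver pattern (the attribute and class names are just convention, and native loading="lazy" covers the simpler cases):

  // Serve the text immediately with lightweight <img data-src="..."> placeholders,
  // then swap in the real (large) images only as they approach the viewport.
  const io = new IntersectionObserver((entries, observer) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target as HTMLImageElement;
      img.src = img.dataset.src!;  // start the heavy download late
      observer.unobserve(img);
    }
  }, { rootMargin: '200px' });     // begin a little before it scrolls into view

  document.querySelectorAll<HTMLImageElement>('img[data-src]')
    .forEach((img) => io.observe(img));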


I believe HTML, CSS and JS need an overhaul. There’ll be a point where maintaining backwards compatibility will result in more harm than benefit. Make new opt-in versions of the three that are brutally simplified. Deprecate the old HTML/CSS/JS, to be EOL’d in 2100.


I was expecting this to go one level deeper and point out that bloated sites that are critical, like: banking, medical, government -- can lead to problems paying bills or getting timely information (especially in the case of medical situations that aren't quite emergencies but close to it).


Mind that not only low-end or old phones have slow CPUs.

Both the $999 Librem 5 and the $1999 Liberty Phone (latest models) have an i.MX8M, which means they have similar processing power as the $50 phones the article is talking about.

I tried to log into Pastebin today. The Cloudflare check took several minutes.


It also impacts users with fast devices.

When I load a bloated website on an iPhone 15 Pro Max over a Unifi AP 7 Pro access point connected to a 1.2Gb WAN, it’s still a slow bloated website.

If you build websites, do as much as you possibly can on the server.

As an industry, how can we get more people to understand this?


I think bloat could be prevented if it were noticed the moment it is introduced.

Once an application has grown bloated, it's difficult to go back and un-bloat it.

Bloat is often introduced accidentally, without need, and unnoticed, just because developers test on modern and powerful devices.

If the developer's regular test matrix included a device with minimal hardware power that was known to run the product smoothly in the past, the dev could immediately notice newly introduced bloat and remove it.

Bloat regression testing.

I call this "ecological development".

We should all do this. No need to aim for devices that already have trouble running your app / website. But take a device that works today and test that you do not degrade with respect to this device.
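
A sketch of what such a regression check could look like in CI, approximating a weak device by throttling the CPU (the URL and budget are placeholders, and Puppeteer's throttling is only a rough stand-in for real low-end hardware):

  import puppeteer from 'puppeteer';

  const TARGET_URL = 'https://example.com';  // placeholder
  const BUDGET_MS = 5_000;                   // arbitrary budget, tune per product

  // Fail CI if the page blows its load budget on an (approximated) slow device.
  async function bloatRegressionCheck(): Promise<void> {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.emulateCPUThrottling(6);      // ~6x slowdown vs. the dev machine
    const start = Date.now();
    await page.goto(TARGET_URL, { waitUntil: 'networkidle2' });
    const elapsed = Date.now() - start;
    await browser.close();
    if (elapsed > BUDGET_MS) {
      throw new Error(`Bloat regression: ${elapsed} ms > budget of ${BUDGET_MS} ms`);
    }
  }

  bloatRegressionCheck().catch((err) => { console.error(err); process.exit(1); });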


> Once an application has grown bloated, it's difficult to go back and un-bloat it.

It will be hard to get to pristine quality, but there ought to be some amount of low hanging fruit, where minimal changes bring noticeable improvement.


Maybe, but determining that will take some investigation. If the regular testing is done on a low-end device, the developer knows as soon as possible that his recent changes introduced a bloat regression.


Really need to see some lighter weight CSS tools combined with HTMX for a lot of general use.

That, or better crafting for web applications. It feels painful to me when I see payloads in excess of 4MB of compressed JS. It makes me really want to embrace wasm frameworks like Yew or Leptos. Things are crazy and nobody seems to notice or care.

I run a relatively dated phone (Pixel 4a) with text set to max size. So many things are broken or unusable.


The web is a communication medium: having bad delivery is going to impact the efficacy of the message. I've worked as both a developer and a designer, and as a developer I've certainly had to push back against content-focused people requesting things they didn't realize were, frankly, bananas. Tech isn't their job, so it was my job to surface those problems before they arose. However, as a designer, I've also had to push back against developers that refused to acknowledge that technical purity is a means to an end, not an end in itself. Something looking the same in lynx and firefox isn't a useful goal in any situation I've encountered, and the only people that think a gopher resource has better UX than a modern webpage stare at code editors all day long.

No matter who it is, when people visualize how to solve a problem, they see how their area of concern contributes more clearly than others'. It's easy to visualize how our contributions will help solve a problem, and also hard to look past how doing something else will negatively impact your tasks. In reality, this medium requires a nuanced balance of considerations that depend on what you need to communicate, why, and to whom. Being useful on a team requires knowing when to interject with your professional expertise, but also know when it's more important to trust other professionals to do their jobs.


I travel a lot and experience a wide range of internet connection speeds and latencies. Hotel Wi-Fi can be horrible.

The web is clearly not designed for or tested on slow connections. UIs feel unresponsive and broken because no one thought that an action might take seconds to load.

Even back home in Germany, we have really unreliable mobile internet. I designed the interactive bits of All About Berlin for people on the U-Bahn, not just office workers on M3 Macbooks with fiber internet.


This is why I'm excited for WebAssembly. Writing an efficient, high-performance, multi-threaded GUI in Rust or Go would be awesome.

Just waiting on it to be practically usable


I wouldn't be so sure. The browser ultimately needs to render the UI from the DOM, which is intrinsically linked with JavaScript. Wasm can maybe help for some of the application logic, but it also comes at the cost of some fixed overhead to bring up the wasm blob. JavaScript performance isn't that bad for UI.


The argument I'm making is one for increased choice in the technologies we have available to us as web developers, particularly when it comes to writing applications that aim to accommodate users with slower/more affordable devices.

In a lot of cases, shipping a wasm blob is not that bad. Depending on the language, compressed and optimized blobs are only a few kilobytes and can be "code-split" such that they are lazy loaded.

Wasm binaries also evaluate faster than JavaScript which could lead to better "web vital" metrics (like "time to first paint/interactive", though anyone can write a slow application in any language so that's not a given).

Syntactically, there is nothing missing from other programming languages that make them unsuited to working with DOM APIs.

The real blocker is waiting for wasm to gain access to DOM APIs thereby dropping the need for JavaScript glue code - once that constraint is lifted, I see no reason why it wouldn't be practical to use an alternative to JavaScript for web development.

JavaScript is practical and useful in _most_ web development cases because the runtime is built in, the ecosystem around it is quite mature and the use cases aren't that complex - however there are many cases that would benefit from more performant languages (think YouTube, Jira, VSCode, banking apps, etc).

When thinking about the "next billion" internet users (related to the topic of the article), we are going to see more users with affordable devices that feature many cores with lower clock speeds. In such contexts it is vital to efficiently leverage the available client device hardware resources - something that JavaScript isn't well suited to and something most of us don't experience (the privilege of having fast computing hardware).

Take multi-threading as an example. JavaScript's single threaded non-blocking nature is safe/ergonomic from a programming perspective and fantastic on devices that feature fast individual cores - however it falls over when clients have slower devices with more cores.

Ideally, we need _the choice_ to follow a more conventional multi-threaded model in cases where we care about performance (a UI thread + worker threads, like we see in native apps/desktop applications). The only initiative that has the potential to get us there is wasm.
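
Today the closest the web gets to that model is Web Workers (and wasm threads are built on top of workers plus SharedArrayBuffer); a minimal sketch of keeping heavy work off the UI thread (the worker file and search function are hypothetical):

  // main.ts - keep the UI thread free by shipping heavy work to a worker.
  declare function renderResults(results: string[]): void;  // stand-in for UI code

  const worker = new Worker('search-worker.js');             // hypothetical file
  worker.onmessage = (e: MessageEvent<string[]>) => renderResults(e.data);

  function search(query: string): void {
    worker.postMessage(query);  // returns immediately; the UI stays responsive
  }

  // search-worker.js, running on its own thread:
  // self.onmessage = (e) => {
  //   const results = expensiveSearch(e.data);  // CPU-heavy work happens here
  //   self.postMessage(results);
  // };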


Because non-web applications are always very efficient and high performance, right?

The problem isn't the technologies available to us. Majority of devs just have no desire to write efficient code.


I hear you. I think it's important to lead by example here and demonstrate the value of performant software that is accessible to a wider range of users/devices.

Native apps do demonstrate that there are performance gains to be made with languages better suited for runtime performance (think threads, AOT compilation, etc)

Just because there is currently a culture in the web development world that de-emphasises the importance of performance, doesn't mean we shouldn't strive to offer better tools to developers.

Right now, wasm is the only feature on the web that has the potential to offer that capability to developers - so I'd say that it's worth wanting and asking for the choice to use better tools for web applications.


Highly Gamed === It is better if users with slow devices see a white screen for 30 seconds vs an indication that something is happening, because ... reasons?


You missed the point, which is that it's better if users with slow devices see actually useful content rather than a splash screen.


>Just as an aside, something I've found funny for a long time is that I get quite a bit of hate mail about the styling on this page (and a similar volume of appreciation mail)

Yes! I've definitely felt like this while using his website. Of course, today I just fixed it with

  main { max-width: 720px; margin: 0 auto; }

but tbh, I don't want to install an extension to customise the css on this one site...


I have never understood why web browser designers don't care to provide a default style sheet that makes unstyled web pages look nice i.e. with proper spacing of elements and sizes of headings etc.


I've always wondered why people removed parts of the page when they were scrolled out of view. Like, don't you think the browser would already optimize for that? And even if it's not stored in the DOM, it's still being stored in JavaScript memory. It's frustrating when people try to reimplement optimizations that the browser already does better.


The browser does not in fact optimize that. Yes it's surprising. If you want it to do basic optimizations like not rendering invisible content you need to give it hints via obscure and relatively recent CSS rules nobody ever heard of.


YouTube is one of the slowest websites I have ever used.

It takes several seconds to load, even with moderate hardware and fast internet connections.


Reddit for me is the slowest site. And while old.reddit fixes this they try to steer you back to main reddit at any opportunity!


RES fixes this, i think. It's a browser extension that forces everything to stay the way it was when reddit worked fine - before publishers bought it.

I don't have any issue with reddit usability, although i do use it a lot less since they nuked my cellphone app from orbit as a cash grab.


Same, but i'm using lemmy more


YouTube doesn’t feel zippy as a website, but the reliability and speed of the videos have been very good for me. I remember the days when buffering videos was hell.


I remember watching YouTube in 720p HD back in 2009 on a mid-range laptop of the era and it felt faster than the current experience on an M1 where the page often stutters and takes seconds to load.

As far as I know, nothing changed on the video detail page to justify such a huge performance degradation. There's still a player, there are still comments, there are ads and suggested videos.

Everyone working on that pile of shit should be ashamed. They would've been better off literally doing nothing and just enjoy the incremental performance gains as the hardware got faster.


Some might be interested in pre-compressing their sites:

  https://gitlab.com/edneville/gzip-disk
It doesn't stop client CPU burn, but it can get data to the client device a bit quicker by skipping on-the-fly compression, which in my experience is helpful on the server side too.
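
If your web server can serve precompressed variants (nginx's gzip_static module, for example), generating them at build time with Node's built-in zlib is only a few lines; a rough sketch (file names are illustrative):

  import { readFileSync, writeFileSync } from 'node:fs';
  import { gzipSync, brotliCompressSync, constants } from 'node:zlib';

  // Compress static assets once at build time so the server can hand out
  // foo.css.gz / foo.css.br directly instead of compressing on every request.
  function precompress(path: string): void {
    const data = readFileSync(path);
    writeFileSync(`${path}.gz`, gzipSync(data, { level: 9 }));
    writeFileSync(`${path}.br`, brotliCompressSync(data, {
      params: { [constants.BROTLI_PARAM_QUALITY]: 11 },  // max (slow) compression
    }));
  }

  ['site.css', 'app.js'].forEach((f) => precompress(f));  // file list is illustrative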


Compare with one of my projects, [1]

It is a minimal, though modern-looking web chat. The HTML, CSS and JS together is 5024 bytes. The Rust backend source is 2801 bytes. It does not pull in anything from anywhere.

[1] https://github.com/coolcoder613eb/minchat


Clarification: The frontend does not pull anything, the backend pulls in libraries for websockets and json using cargo.


Is there a demo site or screenshots somewhere? Add them to README :-)


I added a screenshot.


A simple text site such as Reddit and some Digg clones are nearly unusable on an Intel Atom with a JS-based client.


I think web bloat started with pretty URLs: they provide nothing on top of traditional URLs, yet every request has to parse them unnecessarily. It's such a waste on a huge scale, especially for slow languages, plus the expensive regex processing as well.


Also related: Performance Inequality Gap 2024 https://infrequently.org/2024/01/performance-inequality-gap-...


>There are two attitudes on display here which I see in a lot of software folks. First, that CPU speed is infinite and one shouldn't worry about CPU optimization. And second, that gigantic speedups from hardware should be expected and the only reason hardware engineers wouldn't achieve them is due to spectacular incompetence, so the slow software should be blamed on hardware engineers, not software engineers.

Not just the quote but the whole piece. I am glad this was brought out by Dan, and gets enough attentions to be upvoted. ( Although most are focusing on Server Rendering vs Client Side Rendering; Sigh) A lot of what it said would have been downvoted to oblivion on HN. A multi billion dollar company CTO once commented on HN, why should I know anything about CPU or Foundry as long as they give performance improvements every few years.

It's not only Jeff Atwood: plenty of other software developers, from programming language authors to backend and frontend framework authors with hundreds of thousands of followers, continue to pump out views like Jeff's on social media, without any real understanding of hardware or of the business of selling IP or physical goods.

Hardware engineers have to battle physics, and yet they get zero appreciation. Most of the appreciation you see now around tech circles is completely "new": for a long time no one had heard of TSMC, and ASML wasn't even known until Intel lost its leading node. There's zero understanding of CPU design or even of basic development cycles, of how it takes years just to get a new CPU out. And people somehow hate Qualcomm for not innovating, a company that spends the highest percentage of revenue on R&D in the tech industry.


How the lack of design impacts users that want to read your blog post


I've had this same experience with low-bandwidth situations while traveling: more than a few times I've cursed Apple for not making iOS engineers test with 3G or even 2G connections.


The entire "Premature optimization is the root of all evil" notion should be considered harmful. That one idea has completely destroyed the end user experience.


My own recent experience with this: I run a small SaaS web app, and about a year ago I decided to partner with an advertising company to help with growth.

Part of the plan was that they would remake our static homepage in WordPress, because it would be easier for them to manage and also easier to add a blog, which was part of the new plan. I know WordPress is slow, and I would say unnecessary too, but I said yes because I did not want to micromanage them.

A year later we parted ways, and I was left with WP where the page load was abysmal (3-5 seconds) and weighed about 10 MB of bs. There was something called "Oxy" or "Oxy builder" which added tons of styles, JS and clutter to the markup, plus a kind of SPA-style page load that failed horribly.

So now I've migrated the site to Jekyll, got rid of all the bs, and it's fast again. And it's once again possible for me to really improve it.

So for my businesses I'm not touching WP ever again, and that will be a huge bloat reduction in itself.


Seems like your issues were not with WP itself, but with whatever plugins and themes were added to it. Avoiding WP entirely for this is like avoiding a programming language because the one developer you had experience with sucked at it. WP itself can be very fast, as is evident from the high-profile sites running it (CSS-Tricks, TechCrunch, New York Times, Time Magazine, etc.). I'm not a fan of WP myself, but that's just because I don't like how it's built and how it entirely avoids modern programming standards, not because it is slow, which it most definitely doesn't have to be.


Okta has a speed test?


Presumably Ookla.


Does anyone know how to measure CPU time on the main thread for a page load? I want to benchmark this for my own site.
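(Not an exact answer, but one rough proxy is the Long Tasks API, which reports main-thread tasks longer than 50 ms; summing them approximates main-thread blocking during load. Chrome DevTools' Performance panel and Lighthouse's "Total Blocking Time" give similar views. A sketch to paste into the page:)

  // Sum main-thread tasks longer than 50 ms (shorter work is not counted).
  let blocked = 0;
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) blocked += entry.duration;
  }).observe({ type: 'longtask', buffered: true });

  window.addEventListener('load', () => {
    // Give late tasks a moment to be reported, then log the total.
    setTimeout(() => console.log(`~${Math.round(blocked)} ms of long tasks`), 3000);
  });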


Browsers should only display documents, not apps.

That's what operating systems are for.

Just give native apps what made the web popular in the first place:

• Ability to instantly launch any app just by typing its "name"

• No need to download or install anything

• Ability to revisit any part of an app just by copy/pasting some text and sharing it with anyone.

All that is what appears and matters to users in the end.

--

But I suppose people who would disagree with this really want:

• The ability to snoop and track people across apps (via shit like third-party cookies etc)


I wonder how much of the bloat of modern shiny internet widgets is pure lipstick that does not add any tangible value.


I have a modern i7, 64 GB of RAM, an RTX 3090, a 7 Gbps NVMe SSD and a 1 Gbps internet connection. I can run pretty much any game maxed out in 4K at 100 fps, download 100 GB files in a few minutes, handle all sorts of tasks and workloads, and calculate the 20-billionth digit of pi in a microsecond. What I can't do, however, is use Twitter, or Windows, or any shopping website without stutters and hitches.

Nice work, web developers!


> or any shopping website

Could you give an example? I used a shopping website yesterday, both on a laptop and on an android phone, and apart from cookie banner popups (design choice, not hardware limitation), did not have any significant inconveniences.


Amazon, eBay, Yeezy (which was better than most), Armani, Nike, any assortment of the regular web shopping sites, even computer parts shops, lol. They're all slow and a glitchy mess. They're all horrible. Trying to browse them on my old 13-inch ThinkPad is akin to torture.


People can only comfortably read a maximum of 17 words per line. Best is 12. That text should be in two columns.


68k.news loads fine; it's probably that the people writing your applications are not great at their jobs?


Has someone attempted to do the math on how much CO2 is emitted because of needless bloat and ads?


> As sites have optimized for LCP, it's not uncommon to have a large paint (update) that's completely useless to the user, with the actual content of the page appearing well after the LCP

Aahh yes, the “I’ve loaded in my 38 different loading-shimmer-boxes, now kindly wait another 30 seconds while each of them loads more”

Can we go back to "your page is loaded when _everything_ finishes loading" and not these unhelpful micro-metrics web devs use to lie to themselves and users about the performance of their site?


I don't care. Upgrade your device. You don't make me money.


Anyone know how to buy an Itel P32 in the USA? :/


PUBG runs at 60 fps, the web runs at 0.4 fps. Oh No Optimization.


My 2017 i7 MacBook Pro struggles on websites. It’s absurd.


The web is a pile of horse shit; why is this even news? The best part is how all the SJW Apple/Tesla/cloud/smart-tech yuppies don't care that the 99% of the world who can't afford to buy a new machine every year get an experience on their products that is worse in every way than dial-up, as they force every formerly paper transaction onto the web. Just opening Firefox with a blank home page can take deciseconds or even minutes, and even opening a new blank tab is unresponsive and lags up the UI, on anything but mid-to-high-range desktop hardware.

How does this even have 200 upvotes? I can't count more than one or two websites that don't have infinite bloat for useless nonsense: the cookie popup, social media whatever, ten meme frameworks and a hundred JS libs injected into the page. HNers just read "bad stuff bad", respond "yup" like zombies, and continue doing bad stuff.


This is a manifestation of Wirth's law, again.


This is bad from a global warming perspective.


Nobody, nobody, nobody cares about old hardware, performance, users, etc. If anyone cared, React wouldn't be a success. The last time I tried to use the React website on an old phone, it was slow as hell.

Let's Encrypt is stopping serving Android 7 this year; Android 7 will be blocked from 19% of the web: https://letsencrypt.org/2023/07/10/cross-sign-expiration The workaround is to install Firefox.

Users with old hardware are poor people. Nobody wants poor people around, not even using their website.

"Fuck the user", that's what we heard from a PO when we tried to defend users. Imagine if we had tried to defend poor users.


I think Let's Encrypt made a heroic effort. They deployed a hack to support Android devices long abandoned by the operating system maintainer and manufacturer. If you want to blame LE for the breakage, then also blame: GOOG, for using the IBM PC clone business model without a device tree standard; QCOM, for selling chips but very quickly cutting support; the manufacturer; and the cellular carriers, who would rather lock you into another 24-month installment plan than approve an update for your existing handset.


> If you want to blame LE for the breakage then also blame ...

Of course they are also guilty. LE isn't the most to blame in reality; it's just an example of how old hardware isn't important to decision makers.


The problem is that this attitude infects even government departments, which ought to serve all citizens, not just the rich ones.


React is successful because of the tech/VC bubble, not because it's some miracle technology.

The actual websites where React is useful can be measured in single-digit percentages (effectively a full-blown application requiring a desktop-like experience, think a trading terminal). It is overkill for everything else.


> PO

What's the acronym?

Unfortunately, acronyms are context-sensitive and many users here are not in your context... Maybe try to avoid using acronyms!


Product Owner


Product owner?


It impacts me, and I have a fast device!


web bloat also impacts my sanity


What I notice more and more is that I use an alternative front-end, or deliberately change my user agent to some old browser, on sites that still have a legacy version.


> While reviews note that you can run PUBG and other 3D games with decent performance on a Tecno Spark 8C, this doesn't mean that the device is fast enough to read posts on modern text-centric social media platforms or modern text-centric web forums. While 40fps is achievable in PUBG, we can easily see less than 0.4fps when scrolling on these sites.

Remember this the next time marketing asks the frontend team to implement that new tracking script and everyone assumes that users won't even be able to tell the difference.


The sad part is that traditional marketing cares very little about these users outside of the aggregate numbers.

When the goal is to make people pay, a base strategy is to target users who are already spending money. So "fast enough on a current [device the sales team is using]" becomes the baseline, and optimizing for older/weaker/cheaper environments isn't a proposition that will convince anyone.

Except when you're ad supported. Then the balance will be a bit more in the middle.


I imagine it has more to do with the monstrous website design than the tracking scripts. New Reddit vs. old Reddit, or desktop Reddit vs. mobile Reddit, shouldn't be that different in terms of tracking. But the newer ones run like ass.


Reddit doesn't even run satisfactorily on my gaming laptop. I can run AAA games, but a website is noticeably slow.


Just curious, are you using old.reddit.com?


Once, long ago, the e-commerce company I worked for decided to add TikTok analytics to the front-end. The dev team added the changes but was worried it might impact performance and UX. As a solution, we were told to run the performance tests to check it.

The performance tests were created to mimic user behaviour but only involved company APIs, not third-party requests. No one at the top level cared about this bit of information. We ran the performance test, saw that the response times were almost the same, and it was time to pat ourselves on the back and move on...
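(For contrast, a sketch of the comparison that would have actually answered the question: load the page with and without third-party requests blocked. This assumes Puppeteer; the shop domain is a placeholder.)

  // compare-third-party.js (run with: node compare-third-party.js)
  const puppeteer = require('puppeteer');

  async function measure(url, blockThirdParty) {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.setRequestInterception(true);
    page.on('request', (req) => {
      // "first party" here just means the placeholder shop domain
      const firstParty = new URL(req.url()).hostname.endsWith('example-shop.com');
      if (blockThirdParty && !firstParty) req.abort();
      else req.continue();
    });
    const start = Date.now();
    await page.goto(url, { waitUntil: 'networkidle0' });
    const elapsed = Date.now() - start;
    await browser.close();
    return elapsed;
  }

  (async () => {
    const url = 'https://example-shop.com/';
    console.log('first-party only :', await measure(url, true), 'ms');
    console.log('with third parties:', await measure(url, false), 'ms');
  })();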


Did no one call bullshit on the test before running it? Personally, I'd just flat out refuse to run the test, and instead design a proper test comparing results with the third-party scripts enabled and disabled.

Management and product owners should understand how these things work, and shouldn't ask for bogus data when they do. But teams implementing the changes should just flat out refuse when they know the request isn't reasonable.


Sir, in most companies, if you suggest something technical without having equivalent political power, at best no one will listen to you. At worst, you will create political enemies.

Probably there was an SDE-2 or SDE-3 who called bullshit on it and got ignored.


You call bullshit on it by either refusing to run the test, or better and more helpfully by running a test that answers the performance question.

I've seen these kinds of requests plenty of ways. Sometimes those asking include a design or specs because they honestly thought that was the right way to do it; other times they are knowingly asking for (in this case) a useless test to check a box. In either case, IMO the right response is to ask questions to clarify the goals and build to that, changing the provided design or specs if necessary.

I've had to play this out dozens of times over the years and never earned enemies from it; at one point I won over the PM leader that everyone on the dev team warned me about. It's all about tact and approach: assume everyone is on the up and up and just ask good questions to clarify the goals. It's hard to get mad at that unless it's done in a condescending or argumentative way.


To be fair, it is usually difficult to tell the difference between 243 tracking scripts and 244.


Modern webdev is fugged


(But don't under any circumstances break the four other trackers already running on the site.)


You mean the four new ones they added last week alone, right?


Newspaper sites are notorious for this.


These days they coerce the dev team into implementing a tag manager so they can add their filthy trackers without asking the dev team.


The "they" here can't really coerce the dev team unless the dev team is willing to comply. Refusing to implement an unethical feature is always an option, and given that we're often considered engineers it is well within our right to deem something unsafe or against best practices.


I hate all that tracking and marketing bs as much as the next guy, but if the marketing team is the main stakeholder and is responsible for the budget, that won't work. I also might be a bit biased as a freelancer, but every team I've worked in so far had other freelancers on it, and if we strongly recommended against a practice but the client insisted, then we basically had the choice to either abandon the project (and therefore our current source of income) or simply do what they say. I would love to be in a position where refusing is an option that would not cost me my gig.


This thread really has no purpose if we don't see it as enough of a problem to stand against. I really don't mean that to sound like I'm on a high horse (I'm sure it still sounds that way). There's nothing wrong with being okay with the trade offs, but we don't get to implement these features and complain about how bad they are.


> Remember this the next time marketing asks the frontend team to implement that new tracking script and everyone assumes that users won't even be able to tell the difference.

I mean, maybe they can, but the business doesn't care. If you polled "users" of cable television, I doubt anyone would say they prefer the experience of commercials.


PUBG is now a very special beast: it's CPU-bound, which means we are unlikely to ever see a "AAA" game go beyond its complexity. You can run it on a 1030 GPU at 60 FPS.


It's not like websites are GPU bound


No, but most games are. Thus PUBG is an outlier.


Somebody has to be working on a Simcity or Civilization MMO.


I wish! The truth is server and client programmers rarely get along, so persistent MMOs with a lot of moving parts are only going to happen once one developer is schizo enough to do both well. AAA will never be able to do it.


Just throw a ticket in Jira for these stupid devs to "make it faster".


To make Jira faster ?


Firing up my neoliberal brain.

We should just tax ads and user tracking based on the bandwidth and frontend/backend resources they consume.


Since 2000, I've observed the internet shift from free sharing of information to aggressive monetization of every piece of knowledge, so I suspect that is the culprit. If you use the mobile web on the latest iPhone, you'll find it's unusable without an ad blocker.


Hm, not entirely true, depending on what you mean by "the internet shifting".

The internet has grown, and the free sharing is still going strong. Have a look at Wikipedia, Hacker News, Arxiv.org.

To be honest, the stuff that was shared freely in 2000 was not all that great, and most of that which was, is still available. Remember that you had to buy a subscription to Encyclopaedia Britannica back then, and to all the academic journals.

Granted, there are some non-free information silos, but generally I'm pretty happy with the procrastination advice on Reddit being surrounded by annoying ads that drive me away.


Google "Roche Ff7 rebirth". I was curious who this character is. In 2000-2012 all the top links would be amazing fan sites and forums describing, discussing, and detailing the character with rich info.

Now it's all AI SEO spam laden with data mining and ad bloat on monolithic sites like Fandom that barely work on the newest iPhone.


Encyclopedia Britannica was on CDs, not on the internet. I'm old enough to remember.


> In 1994 Britannica debuted the first Internet-based encyclopaedia. Users paid a fee to access the information, which was located at http://www.eb.com

https://www.britannica.com/topic/Encyclopaedia-Britannica-En...

(Be warned, there are ads on that page.)


And Britannica wasn’t filled with highly moderated propaganda.

Wikipedia is a failed experiment.


Wikipedia is great, it's just not as good as it could be


the tragedy of the commons


Here is how you do web: https://forum.dlang.org/ Observe the speed


This loads faster than native apps serving local content on my device.


For me, it took an estimated 3-5s to load on first visit. Fast, but not "faster than native apps"

The second time round it loaded almost instantly.

I'm guessing there's some caching going on.


Ironically, the modern web, built by programmers, is scorned by programmers. You all collectively, persistently, shamelessly decided that AngularReactNodeWebpackViteBloat 200 MB asynchronous hellspawn websites needed to be made this way.

When all this time, lightweight CSS, anchor links and some PHP were all we needed.


*built by techbros


What a refreshing experience.


Or HN. All that talk about "brutalist web design" yet most websites are more bloated than ever...


'Brutalist web design' is a pretty small niche though, no? It's the kind of thing Hacker News readers will have heard of, but I don't think it was ever close to mainstream.


IIRC, the D forum also offers direct NNTP access. Would be interesting to compare web access with e.g. tin on a variety of devices…


How crude. I can't even post gifs. This is basically a glorified e-mail client, but with extra steps. No social media integration? What is this, 2004? It's not even decentralized like Matrix.

Can’t even post inline videos, bro.

\s

Jokes aside, I do miss this type of interaction, especially for open source projects. It made finding solutions to common issues much easier when documentation was lacking or hadn't been updated in a long time.

Now all or most projects have adopted some combination of a Discord channel, Slack group, subreddit or Twitter. I remember searching for an issue like mine in a Slack channel only to realize the chat history had been limited because the owners hadn't paid the extra amount to archive messages beyond what was given for free.


Missing text styling impacts all users. The text is hardly legible. You really don't need much styling (bloat) to get a good result, as demonstrated on http://bettermotherfuckingwebsite.com
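(For scale, a handful of declarations in that spirit is enough; the values below are just reasonable defaults, not that site's exact stylesheet:)

  body {
    max-width: 650px;   /* keeps lines to roughly a dozen words */
    margin: 40px auto;
    padding: 0 10px;
    font-size: 18px;
    line-height: 1.6;
    color: #444;
  }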



addressed in the article


Google web engines (Blink/Gecko) and Apple web engines with their SDKs are sickening. They are an insult to sanity. They well deserve their hate.


The engines are perfectly fine.

It's the websites/web developers that are the problem.


I don't agree, the web devs are making it worse.


Re: WordPress, with which theme? Benchmarked on the default theme they give away for free, like "2024" or whatever?

Obviously, a good coder optimizes their own theme to get a 100% score on Lighthouse.


Perhaps these people are better off running a web browser on a remote machine and interfacing with it over VNC.


This is trolling, right?

Lemme just give my grandma a list of instructions for doing this so she can get to Facebook. I'll let you know how it works out.


Obviously you'd want to productize it (see WebTV, Mighty browser).


Who's going to pay for that server? We're talking about $50-100 phones here.


Web devs and their managers should use these web "apps" on a bad machine over VNC on a slow connection for a few months. These JavaScript hellpages are basically a crime against humanity and contribute a lot to e-waste, pollution and carbon dioxide emissions.



