I can only think that modern front end development has failed (twitter.com/antirez)
546 points by gls2ro on April 4, 2021 | hide | past | favorite | 503 comments



What upsets and concerns me the most is seeing poorly developed SPAs on really important sites, for example government service application websites. If reddit or nytimes has a bloated, intermittently failing SPA site, that's an annoyance. When it's a form to apply for unemployment, ACA health care, the DMV, or other critical services, it's a critical failure. Especially since these services are most often used by exactly the population most impacted by bloated SPAs (they tend to have slow or unreliable internet and slow computers; maybe a cheap Android phone is all they have).

Such sites should be using minimal or no JS. These aren't meant to be pretty interactive sites; they need to be solid, bulletproof sites so people can get critical services. And I haven't even mentioned how SPA sites often lack any accessibility features (which are so much easier to implement if you stick to standard HTML+CSS and no/minimal JS).


Yeah, what's weird is that there's an entire generation of developers who think of SPAs as the default.

They think that server side is slower because you have to send down more data, or you have to wait for the server to generate HTML.

Quite the contrary: it's slower to send down 1 MB or 10 MB of JavaScript to render a page than to simply send down a 100 KB HTML page. Even if you also need some JS, browsers know how to render concurrently with downloads, as long as you provide enough HTML.

Rendering HTML on a server side Intel/AMD/whatever CPU is way faster than rendering it on a mobile device (and probably more efficient too).

Even if it weren't faster and more efficient, it would save battery power on the client.

And there is a ton of latency on the client side these days, ignoring network issues. There are ways of using the DOM that are expensive, and a lot of apps and frameworks seem to tickle those pathological cases. These 20-year-old browser codebases don't seem to be great workloads for even modern Android or iPhone devices.
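The "render concurrently with downloads" point can be sketched on the server: flush the static page shell first, so the browser starts parsing while the slow parts are still being generated. A framework-free sketch in Node-style JavaScript (the fake response object just makes it self-contained; a real server would pass `http.ServerResponse` here):

```javascript
// Sketch: flush the static shell immediately, then append the
// slow-to-generate content to the same response. A browser can start
// parsing and rendering as soon as the first chunks arrive.
function streamPage(res, renderSlowContent) {
  res.write("<!doctype html><html><head><title>Fast page</title></head><body>");
  res.write("<h1>Header is visible immediately</h1>");
  res.write(renderSlowContent()); // e.g. a database-backed fragment
  res.write("</body></html>");
  res.end();
}

// Fake writable response, only so the sketch runs standalone.
const chunks = [];
const fakeRes = { write: (c) => chunks.push(c), end: () => {} };
streamPage(fakeRes, () => "<p>Main content, generated later.</p>");
console.log(chunks[0].startsWith("<!doctype html>")); // → true: shell goes out first
```

The browser receives the `<head>` and visible header in the first chunks and can begin fetching CSS and rendering before the database work finishes.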

---

edit: To be fair, I think what's driving this is that many sites have mobile apps and web apps now, and mobile apps are prioritized because they have more permissions on the device. (This is obvious when you look at what happened to Reddit, etc.)

It's indeed a more consistent architecture to do state management all on the client. Doing a mix of state on the server and state on the client is a recipe for confusion -- now you have to synchronize it.

Still there are plenty of apps that are website-only, like the government sites people are talking about. Those people appear to be copying the slow architecture of the dual mobile+web clients and getting a result that's worse.


Thinking about it a bit more, Reddit is a really good example because it launched and got popular before the iPhone or the App Store existed (2006 or so). It was a fast and minimal site back then, similar to what Google used to be.

Either the existence of the mobile app, or the drive to get people to install it, ruined the website. It's slower and has poorer usability than 5 years ago, and also 15 years ago.

Twitter is another company that existed before the App Store and the SPA trend. I noticed they recently turned off the ability to see the 280 chars you care about without having JavaScript on :-(

It's trivial to have a no-JS fallback, and they had it, but turned it off.


Imo Reddit ruined their site on purpose, since that pushes users to the app, where ad blockers have a hard time blocking ads (unless you have a Pi-hole set up). They also added a ton of wasted space to make users scroll longer, which stuffs in more ads and increases time on site.


This isn't an opinion, this is a proven fact, given that Reddit blocks many subreddits from being accessed from the browser when using the default UI, redirecting to their app, for no reason other than moving users to the app. They don't even pretend there is another reason; they have never given any explanation for why they do this. It's some of the most user-hostile behaviour I've seen. Even the usual suspects don't block basic access.


Good to hear there’s fact behind my opinion. I hadn’t researched it deeply, just figured it out based on my own experience on their site. I’ve barely visited Reddit for 2 years now unless I really need to, and when I do, I go to the old.reddit.com subdomain.

Question: which subreddits does Reddit block from the browser? Genuinely curious. Are they also blocked from third party apps like Apollo?


Indeed about Twitter; I won't follow about 95% of links there due to the issue you mention.


I've always taken "server side is slower" to mean "slower to host" in the context of web pages. Sure, it may blow chunks for the client to parse 10 MB of JS just to click a button and leave, but that's 10 MB of static JS served via a dirt-cheap anycast-style CDN. It doesn't matter whether you have 1 client or 100,000: you could host it all on a single server's worth of CDN, which never has to worry about client state (and the tiny percentage of client state you do need to manage might live on a much smaller service that handles just that logic).

I'm more of the classic take that the "problem" is that performance has continued to grow, so we get more stuff made faster, but the tradeoff is that it runs at the same speed. Client side or server side, there's no reason the app needs 10 MB of JS logic to do its job; it was just quicker and easier to deploy with 10 MB of JS logic. For some things like required government services this is a real problem, but for most things it's just reality: how fast a piece of software is isn't the only benchmark software is made against, and often it's not even in the top 3 things it's checked against.


Take React, Vue, Angular, or similar away from most current front end developers and there is panic. When I say panic I mean full insanity panic, like abandoning the profession or completely going postal.

——

A simple checklist to provide superior front end applications:

* Don’t use `this`. You (the general hypothetical you) probably don’t realize how easily you can live without it, only because you have never tried. Doing so will dramatically shrink and untangle your spaghetti code.

* Don’t use addEventListener for assigning events. That method was added around the time of ES5 to minimize disruption, so that marketers could add a bunch of advertising, spyware, and metric nonsense everywhere more easily without asking permission from developers. That method complicates code management, is a potential source of memory leaks, and performs more slowly.

* Don’t use querySelectors. They are a crutch popularized by jQuery’s Sizzle utility. They are super epic slow and limit the creative expression of your developers, because there is so much they can’t do compared to other means of accessing the DOM.

I now add ESLint rules to my code to automate enforcement of that tiny checklist.


This is a silly list. What? You really think document.getElementById is going to be faster than querySelector("#foo")? You really think there is a meaningful difference between other methods of adding event handlers, other than it being way hackier to add more than one with the other methods?

I have no idea what your objection to "this" is. Object Oriented code can be bad sometimes but so can procedural code.


This person has a weird unjustified objection to querySelector.

I remember this same conversation from 8 months ago, where I pointed out that parsing something like '#foo' is NOT slow; he argued against it, 5 people pointed out the same thing, and he was not receptive to that feedback, etc.

https://news.ycombinator.com/item?id=24054745

Performance is hard; optimizing the wrong thing is also a problem ...


I guess you ignored this from that thread: https://news.ycombinator.com/item?id=24057163

If jsPerf is down try http://jsbench.github.io/


What precisely do you think a microbenchmark taken out of context is going to prove?


That a 250,000x performance difference is a very large gap. It’s the difference between walking 1 foot or 48 miles.


Or the difference between walking 1 femtometer or a quarter of a nanometer in a day where you're planning to hike 10km. And that's assuming the relative difference holds to the same degree in a real app, which seems unlikely.

P.S. I also didn't get that level of difference when I tried to run the microbenchmark, but I didn't try very hard.
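To put the analogy in purely illustrative numbers (the per-call costs below are invented for the sake of the arithmetic, not measurements): even granting a large per-call gap between selector APIs, selector time vanishes inside a page-load budget.

```javascript
// Illustrative arithmetic, not a benchmark: suppose one selector API
// costs 1000 ns per call and another costs 10 ns per call.
const slowNsPerCall = 1000;
const fastNsPerCall = 10;
const calls = 500;        // a generous count of DOM lookups for one page
const pageLoadMs = 1000;  // a 1-second page-load budget

// Total extra cost of always choosing the "slow" API:
const extraMs = (calls * (slowNsPerCall - fastNsPerCall)) / 1e6;
console.log(extraMs < 1); // → true: well under 1 ms of a 1000 ms budget
```

Even a 100x per-call difference amounts to a fraction of a millisecond per page; the 10x and 100x slowdowns users feel come from elsewhere.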


I think a bit of context about `this` is important.

Using `this` in a callback -- where you expect your caller to provide your local binding -- is straight-up bad juju. Pass arguments instead.

Using `this` in an object instance to namespace is just fine. The entire point of having an object is that you get these nice encapsulated namespaces which can send messages to each other.
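A minimal sketch of both cases (names are illustrative): `this` inside a method resolves to the instance, but the same function handed off as a bare callback loses that binding, so pass the data explicitly instead.

```javascript
class Counter {
  constructor() { this.count = 0; }
  // Fine: `this` namespaces the instance's own state.
  increment() { this.count += 1; return this.count; }
}

const counter = new Counter();
counter.increment(); // → 1

// Bad juju: handing the method off as a bare callback detaches `this`.
const detached = counter.increment;
// detached(); // TypeError: `this` is undefined in strict-mode class methods

// Instead, pass what you need as arguments (or bind explicitly):
const incrementBy = (state, n) => { state.count += n; return state.count; };
incrementBy(counter, 2); // → 3
```

The callback case is exactly where `this` earns its bad reputation; the object case is where it is doing its actual job.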

One suggestion, and a related ask: rather than just saying "Don't do FOO", provide alternatives as well; e.g., "Don't use `addEventListener`, use `superFooBarBazzer`".

Bonus points awarded for runnable examples. :)

Perhaps my biggest gripe about the entire Javascript ecosystem is finding libraries that provide documentation that looks like this:

    // See how easy my library is to use!
    myLibrary.doTheThing(fooBar, bazBang);
Without ever showing you what a `fooBar` or `bazBang` are, and perhaps where and/or how an enterprising developer might obtain one.
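Extending the hypothetical snippet above, the fix costs only a couple of lines: show what the arguments are and where they come from. (The stub below just makes the example runnable; the library, names, and values are all invented for illustration.)

```javascript
// Stub standing in for the hypothetical library, so the snippet runs:
const myLibrary = {
  doTheThing(config, onDone) { onDone(`fetched ${config.endpoint}`); }
};

// What usable docs would add: fooBar is a plain config object you build
// yourself; bazBang is a callback the library invokes with the result.
const fooBar = { endpoint: "/things", retries: 3 };
let lastResult;
const bazBang = (result) => { lastResult = result; };

myLibrary.doTheThing(fooBar, bazBang);
console.log(lastResult); // → "fetched /things"
```

Two comments and one concrete value for each argument, and the enterprising developer no longer has to reverse engineer the library's source.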


https://github.com/prettydiff/share-file-systems

40k LOC. I wrote an ESLint rule to error on `this` in both statements and expressions.


Your opening sentence is weirdly derogatory...

And I think these are all pretty far from the main performance issues with common apps FWIW. These are more low level coding issues, whereas the real slowness (e.g. 10x or 100x issues) comes from architecture, dependencies, network usage, data structures, reflows, etc.


> comes from architecture, dependencies, network usage, data structures, reflows, etc.

That is tech debt and complexity. If you start from a simplicity first perspective most of what you list evaporates as a matter of habit.


I know this won't count for much, but as an amateur programmer who has done a lot of web scraping at my job due to a restricted corporate environment, it's a breath of fresh air to work with the few websites that actually assign IDs to elements, don't autogenerate all graphic elements, and feel like whoever wrote them actually cared. Especially if you're trying to parse it with something stupid like Excel VBA. Trying to reverse engineer what happens when you click a button on a site with 20 event listeners tied to "submit" is a freakish nightmare.


What do you use instead of these constructs?


Instead of `this` I use explicitly named variables and functions.

Instead of addEventListener I assign event handlers directly to the event property of the DOM: button.onclick = whatever. This forces simplicity and cleaner code management because only one function can be assigned to any given property.

Instead of querySelectors I use other standard DOM methods.
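The addEventListener point can be demonstrated without a browser (the plain object below is a stand-in for a DOM element; EventTarget and Event are built into Node 15+ as well as browsers): an `on*` property holds exactly one handler, while addEventListener accumulates them.

```javascript
// An `on*` property holds a single function: reassigning replaces it.
const button = { onclick: null }; // stand-in for a DOM button
button.onclick = () => "first";
button.onclick = () => "second";  // silently replaces "first"
console.log(button.onclick());    // → "second"

// addEventListener accumulates: every registered listener runs.
const target = new EventTarget();
const calls = [];
target.addEventListener("click", () => calls.push("a"));
target.addEventListener("click", () => calls.push("b"));
target.dispatchEvent(new Event("click"));
console.log(calls);               // → [ 'a', 'b' ]
```

Which behaviour you want depends on whether multiple independent listeners are a feature or, as argued above, a code-management hazard.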


Other standard DOM methods like what? getElementById and getElementsByClassName?


Try this for a starter guide: https://prettydiff.com/2/guide/unrelated_dom.xhtml

For a practical example try this: https://github.com/prettydiff/semanticText


I also want to know this.


Try this for a starter guide: https://prettydiff.com/2/guide/unrelated_dom.xhtml

For a practical example try this: https://github.com/prettydiff/semanticText


I believe Svelte has the best outlook for developers wanting to build with JS as it doesn’t load an entire framework upfront (Angular/React/etc). In regards to server side languages being slow, as you pointed out, it’s really not the case; in my experience the culprit to slow applications is the database design.


> They think that server side is slower because you have to send down more data, or you have to wait for the server to generate HTML.

> Quite the contrary, it's slower to send down 1 MB or 10 MB of JavaScript to render a page, than to simply send down a 100 KB HTML page. Even if you need some JS also, browsers know how to render concurrently with downloads, as long as you provide enough HTML.

The argument I remember from years ago used "slower" as a bad simplification. What it actually meant was, doing the rendering server-side wasted CPU the server could be using to respond to another request. Instead, just send the data and distribute some of this processing to all your users by way of client-side rendering.

Also, back then bundling was a lot rarer than it is now, so the large libraries that made up most of those 10 MB JavaScript files would be separate files the browser could keep cached.


I am going to :+1: the UK census site. Built on code developed by the gov.uk digital service, it has apparently been bulletproof at taking 20 million-plus individual households through a moderately complex survey.

Sometimes it can be done right.

And it uses a framework :-)


Yeah, I was pretty impressed with it too. One thing I liked: it does lots of error checking along the way to pick up accidental screw-ups.

E.g. it says ‘What is your date of birth?’ and then on the next page it says, ‘You are X years old. Is that correct?’


Not quite bulletproof. I found a couple of bugs while using it.

At the end, just before submission, it showed the completed sections, and you could look at each section to see a tidy summary of the answers provided. Except for section 1, where doing that jumped to the last of the section's questions instead.

I wanted to check one of the answers I had given in section 1 before committing, and due to the missing summary page, tried stepping backwards and forwards through each question. All were shown, except that the question I wanted to check (and had answered) was skipped.

Inspired to try things, I added a non-existent person to the household, then removed the non-existent person.

After that, when I stepped through all questions in section 1 it included the question and answer I'd been looking for, allowing me to confirm it was correct before submission.


Consider filing a bug report: https://census.gov.uk/en/web-form/


Thanks. I already did so via the feedback form at the end.


gov.uk is such a glaring exception to government sites around the world that I can only imagine there was some massive screw up where the developers were allowed to go off and build a fast, accessible and responsive site on their own, without the requisite ten layers of committees, meetings and expensive external consultants. I hope there is a government inquiry to ensure this doesn't happen again and that taxpayers' money is properly squandered on massive IT failures as per standard practice.


This gave me a good laugh, thanks for that ;)


Yeah, I was impressed enough to fill out the feedback form at the end. It is probably the first time I've used a feedback form for compliments in my life.


Indeed - I used this site the other day. My reaction at the time was "This is the nicest web application I've used in decades." Web design that good is so rare now, it really stands out.


Which framework?


If my memory serves me right, it's Python/Django. I contracted in that department a few years ago.

Edit: Just remembered it's all open source too! Look here and search for repos beginning "eq-". Survey runner is the app I believe. When I was there the plan was to use that, though perhaps they eventually wrote a fresh app. Anyway the code should all be there https://github.com/ONSdigital?q=Eq&type=&language=&sort=


That was notably one of the best web experiences I’ve used recently. Gov.uk is exceptionally good.


https://docs.publishing.service.gov.uk/

Looks like the core site is Rails but it seems they use a bit of everything. Preact, Go, Node, etc


Wagtail


Why do these things even need to be an SPA? What function does that serve when the standardized and infinitely more compatible form-with-a-bit-of-javascript approach works just as well if not better?

I work on one such project and it absolutely drives me nuts -- it's a rails app, but the customer front-end (which is literally just a form to fill out) is a React SPA. There is nothing there that couldn't be done with Turbolinks and some light JS for validations/popups.


Because that's the mainstream, that's what's easy to procure, hire for, negotiate with vendors, that's the default, the preferred, the future proof, the supported.

And the tech doesn't really matter. I hate React with a passion, because Angular is so much more sane - in my experience. But it's fine. It's mature, it can be made to perform completely well.

The tech doesn't really matter. The people don't matter either. Even the costs don't matter as much as people think. What matters is political will and procurement culture, so systems and structures. These will influence (and bring) all the rest in line.


> the future proof

No, not that.


Depressingly correct.

Variations of "nobody ever got fired for buying IBM", repeated forever.


No-one got fired for writing an SPA, but we did decide not to renew the contract for the next major update.


When the only tool you know is a hammer you tend to just go around smashing things


How can a government contractor reasonably justify a price tag on par with fancy bloated non-government sites for a no-nonsense form + JS approach?


Because they're a business that will charge as much as they can absolutely get away with and will do their best to make the project and maintenance costs as bloated as possible.

The question is why there aren't people in the government with either the technical ability or the political clout to audit and push back at any point in the project.


Costs the same as what you would otherwise pay, plus it actually works?


>plus it actually works?

Heh. "That was the most unkindest cut of all."

https://www.google.com/search?q=unkindest+cut+of+all+quote

Also, "the emperor's new clothes".

Both relevant.


> plus it actually works

That’s the problem. Maintenance is big money for government contractors.


It's really the people writing the requirements who are driving a lot of these things. Customers are used to Gmail, FB etc. and want similar experiences.


Because they want to program websites as if they were mobile apps.


This is a very simplistic characterization of what's happening.

First of all, for all the broken websites, there are also a lot of websites that are not broken at all. It's also very easy to make a broken website with completely server-side rendering, and that actually happens often enough.

Second, SPAs decouple frontend and backend in a very strict way, which can bring enormous organizational benefits. Time-to-market is greatly improved, etc.

This whole "frontend vs backend" dialogue is basically white noise that completely misses the point. SPA or not, in the end it's just a tool to get the job done. Both are prone to errors when handled improperly.

A website that got it completely right is the Dutch corona dashboard called "Coronadashboard" created by the Dutch government: https://coronadashboard.rijksoverheid.nl. It's blazingly fast, extremely well-designed, looks great and the code is of exceptional quality. Also it's open-source, have a look at the code: https://github.com/minvws/nl-covid19-data-dashboard/.

The dashboard is completely written in Javascript. I truly believe a website of such high quality would not be possible without frameworks such as React or Next.js (or whatever other frameworks and their respective tooling have to offer).

Closing note: let's try to learn more from the websites that got it right than from the ones that have failed. It's so easy to be critical; it's much harder to give some praise.


> A website that got it completely right is the Dutch corona dashboard called "Coronadashboard" created by the Dutch government

Not sure what everyone else here is using but you're right, at least for me the website is running buttery smooth in both Firefox and Chrome and the code is of exceptional quality.

> I truly believe a website with of such high quality would not be possible without frameworks such as React or Next.js (or whatever other framework and their respective tooling has to offer).

I agree. I wrote my first lines of HTML & CSS almost 20 years ago, and back then JavaScript dev was a nightmare. People wouldn't even have been able to create an interactive website like the Coronadashboard. (Of course we're not talking about static websites here – these were already relatively easy back then.) Nowadays, JavaScript dev admittedly still is a nightmare, but there are at least tools like TypeScript, Angular, React and so on that make things a bit less painful and allow experienced web developers to create exceptional frontends. I say "experienced" because the frameworks still come with some pitfalls, and bad practices are still very common. (I can't believe how many tutorials about using forms with React still recommend updating the state and re-rendering the entire form upon every.single.keystroke.)


> Time to market is greatly improved

Can you please clarify this?

If you are referring to a web app time to market I mostly see otherwise.

Here is an example: create a form to edit user settings. In Rails/Django it is pretty straightforward. But if you go React, then you have an API and a React component, and you have to think about routes, API security, and validations on both the FE and BE.

There are advantages for SPA probably but time to market is not one of them in this case.

If you are referring to creating a mobile app and a webapp then maybe React with React Native is indeed faster. With this I agree. But I personally prefer to wait for Hotwire to launch their mobile framework and use that.
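A framework-free sketch of the server-rendered version of that settings form (field names and messages are invented for illustration): validation lives in one place on the server, and the response is plain HTML with errors re-rendered inline, so no separate API or client-side validation layer is needed.

```javascript
// Render a settings form as plain HTML; on POST the server validates
// and re-renders the same form with inline errors. No client JS needed.
function renderSettingsForm(values = {}, errors = {}) {
  const err = (field) =>
    errors[field] ? `<p class="error">${errors[field]}</p>` : "";
  return [
    '<form method="POST" action="/settings">',
    err("email"),
    `<label>Email <input name="email" value="${values.email ?? ""}"></label>`,
    '<button type="submit">Save</button>',
    "</form>",
  ].join("\n");
}

// Re-rendering after a failed submission, preserving the user's input:
const html = renderSettingsForm(
  { email: "not-an-email" },
  { email: "Please enter a valid email address." }
);
console.log(html.includes("Please enter a valid email address.")); // → true
```

(A real app would also HTML-escape the interpolated values; omitted here for brevity.)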


Backend and frontend are different species and require different skillsets. It's much easier to scale from an organizational perspective using SPAs, especially in companies where software is not the core product (think non-tech Fortune 500).


Yes, I agree with you that organisational scaling is easier when having BE and FE handled by different people.

I am not yet convinced that optimising for organisational scaling brings technical benefits.


Time-to-market is 99% an organizational problem.


> Time-to-market is greatly improved

I've experienced and have witnessed quite the opposite with SPAs


I'm not sure why you think this was written in pure JS without a framework. It uses React and Sanity.


It kind of freezes up on my phone. Not intolerable but definitely slow.


It loads instantly on my Xr.


Why do you think it would not be possible to build a website of such quality without frameworks?

On a separate note, I think the dashboard is alright but I wouldn't call it excellent. It's a bit slow and some things, for example mousing over the map, are glitchy.


I honestly hate SPAs. They're unnecessary in almost every single use case, yet everybody is shifting their shit into one.


It’s because frontend devs are taught to use react, so suddenly every website unnecessarily becomes a react app. If every tutorial and class out there teaches people to use hammers (react) then people start using hammers to screw in a lightbulb or perform surgery.


I really believe it's a case of resume driven development. I'm sure there are counter-examples, but the whole move to SPAs, from my vantage point, has been driven by tech people and not leaders. Self-inflicted.


While RDD is a thing, the root cause is the stakeholders' requirements for more sophisticated and complex sites (whether smoother page transitions, more intricate effects, incessant demands to spy on every user interaction etc). It became harder and harder to do all this with server rendering and a rat's nest of jQuery. JS frameworks arose to meet these requirements and became an industry standard, and developers responded to market demand. It's all gone too far of course, but the stakeholders were as much to blame as anyone.


Yes, I agree for those cases where the SPA was created to match customer expectations. I'm talking about using it where it doesn't belong, which happens quite a lot.


You're making software engineers way more important than they are. They don't hold this kind of decision power except in the smallest companies. In most cases of SME and large enterprises, there is a VP, a director or sometimes a "chapter lead" pushing it (in my experience, 9/10 times they believe this push will get them a promotion, as they "helped to move the company towards an important new technology"); has nothing to do with resumes.


I've had the opposite experience in this specific case.


Why SPAs? They're good when you develop capabilities API-first. When we build features, we follow an approach of making the feature possible (customer support can exercise the feature with PostMan or cURL) then making it friendly with a UI.

There will be some tweaks and changes to the API to support the UI, but it's rarely drastic, and it ensures that every single capability we build out can be exercised by some other kind of program somewhere.

If you're building components of a larger system (which we do), SPAs and web components atop back-end APIs make sense. If you're building a one-off fill-out form kind of application... No those don't make sense. You don't even need JavaScript for those, if you degrade into just HTML + CSS for users that have shut off JS.


Even if you develop capabilities API-first, you don't need an SPA, though. You can create some backend code that consumes your API and in turn serves separate web pages to a front end.


What concerns me more than that is how government sites require Google and the like (captcha and analytics at the very least), giving them the opportunity to hoover up all of our sensitive data in one convenient place.


Yes definitely.

I was surprised recently when I had to fill in a massive form-based site for UK NHS mental health survey stuff. The site appeared as a flat-background, old-style early 2000’s sort of thing with bits of Comic Sans in it. I nearly died when I first saw it, expecting a shit show.

But it turned out to be responsive and fast. It worked perfectly from end to end and had little to no JavaScript. It was by far the best thing I’ve used for years. There were over 100 page transitions in total. It wasn’t an SPA but a classic web site with little or no client-side intelligence. Seemed to be backed by Python.

I want this back.


> The site appeared as a flat background old style early 2000’s sort of thing with bits of comic sans in it.

That's surprising - even external stuff usually has to follow the NHS's design system.

https://service-manual.nhs.uk/design-system


> early 2000’s sort of thing

Right. It probably IS an old site and they've had years to iron-out bugs.


Nope. I spoke to the woman who issued us with an account and they just had it built for them.


SPA = single-page application


Thank you! It is hard to search for that term :)


Completely agree. And yet, were there ever better days? In the past there were simply fewer websites online. I think a bad website is better than none.

At the same time, the real question is why lots of governments reinvent everything from scratch. That is maybe the original sin.


At least regarding the US federal government, it was a known issue that (a) the government wasn't willing to spend enough to hire quality software developers and (b) the government bureaucracy lacked a process to determine whether web software was "good." Specifically, no effective methods of software validation based on modern best practices (the processes they used were descended from the validation processes for material acquisition, which look extremely different from software engineering).

I'm not sure if the problems have been fixed, but they were both recognized and a process was put in place to address them after the healthcare.gov debacle.


> If reddit or nytimes has a bloated, intermittently failing SPA site, that's an annoyance. When it's a form to apply for unemployment, ACA health care, DMV, or other critical services, it's a critical failure.

Unfortunately the incentives are backwards. Typically governments have to choose the lowest bid. Those private companies you mentioned can make more complex tradeoffs.

Sometimes, of course (cough Florida unemployment and covid tracking sites), failure in performance is by design.


The New York Times used to be a mess on mobile where swiping back was a 5 second wait until the SPA synced. It is much better today, so either they ditched the SPA or have now fully simulated a browser’s back and forward behavior.


The founder of SvelteJS works at the NYT. SvelteJS works well on underpowered devices.


I went to redesign my blog recently. I was intending to use the Go template engine, but I'm not a frontend person, so I wanted to use a frontend framework. I quickly discovered that most of even the purest of CSS frameworks did not have instructions for what I wanted to do. They all used Next.js, Webpack, etc., and were designed to be used in Vue or React. I love component frameworks for their simplicity, but everything that powers them is complex.


> Such sites should be using minimal or no JS. These aren't meant to be pretty interactive sites, they need to be solid bulletproof sites so people can get critical services

Government services were slow and unreliable before computers. The problems aren’t technological.


this isn't just a problem with SPAs. The quickly-built vaccine registration site I was using kept giving me 500 errors from Heroku.


This is the exact opposite of my experience.

When I click on a link, or click to submit something, and I realize that it's NOT an SPA, my immediate thought is "this is going to be a nightmare."

When I click a button, and it has to make a request to load the next set of html, fully replace the page contents, and probably submitted a post request that will have issues restoring the state of my form if I go back, etc, I feel like I'm on a DMV site from the 90s. Navigating something that is meant to operate as a cohesive application by instead using a series of markup displays is only ever going to be hacky at best. I love using SPAs, because they're actually applications, rather than snapshotted frames of an application running on a remote server.


I exclusively write SPAs these days and I think they are vastly superior for many reasons. The problem is that you can build a shitty product with good tools.

One of my duties is performance improvement, so I’m very familiar with problematic architectures. You can have an instantly loading informational SPA... I tend to use Gatsby for that. For more interactive sites, I prefer vanilla React with lightweight libs, code splitting and sensible caching rules.

I do agree that poor design, reluctance to refactor and lib/tracking heavy apps are very problematic. Isn’t that something that’s always been a problem in Webdev?


I'd like to add a bit more nuance. Modern frontend development provides many opportunities for failure, and these failures often make their way into production. I, personally, get great results with modern FE development. My users are happy. I am happy. It's all very successful. All the defenders of modern FE development will likely chime in with the same sentiment.

I also get great results with C, which arguably provides even more catastrophic opportunities for failure. I wouldn't say "C is a failure", but I absolutely hate working in C, and I'm glad we've made improvements to systems programming.

The point I'm moving toward is that we absolutely should strive to make better tools for delivering web content, but the reasons for failure are far more complex than the tooling itself. So let's not frame things in black and white and call something that's been used with large amounts of success a failure. That's not very productive and is just flame bait. The real question is "What's next?" How are we going to improve the current situation?


I love modern frontend development. I can build apps that scale easily to hundreds of thousands of users. They are fast where they need to be fast, and building components means complexity lives only where it's needed. Static parts are rendered statically, dynamic parts are rendered dynamically. I can write all the code for the entire stack in JavaScript. The entire workflow is streamlined in a simple way (webpack really isn't that hard to grok). There are templates for every project imaginable.

I don't need an entire VPS for my web apps, just a simple static hosting service. My apps run on every system and any browser back to IE9 without any issues whatsoever. No complex build tools or linking process or debugging weird architectures needed. Compatibility issues are solved automatically in any modern build pipeline by adding vendor prefixes. New code is transpiled to old code for older browsers.

Debugging modern front-end code is a breeze; there are an infinite amount of tools both for styling and the code itself. My modern frontend app takes seconds to build; Qt apps I built in the past needed up to twenty minutes, just to check a simple change. No need for that with hot reloading.

My users are happy too: they don't have to download an app which might not work on their specific system. Linux, Windows and Mac users alike can use it on their computers, laptops, tablets and phones without any issue. They can use the default environment a browser provides for an even better user experience (zooming, screen readers, upscaling fonts, copying images and text, preferences for dark mode, theming their browsers, saving credentials and auto-filling them, sharing stuff, etc.).

Integrating stuff from other companies has never been easier: PayPal, Stripe and Apple products all provide a well tested and documented library for every framework out there. There are open source packages for anything; it's a FOSS dream. Building prototypes for anything is insanely fast, thanks to modern debugging tools in Chrome and Firefox.

It's much better than people make it out to be.


> build apps that scale easily to hundreds of thousands of users

I don't understand this.

Frontend dev is about writing portable software to run on as many runtimes as there are users. There is literally nothing to prevent any frontend software, good or bad, from "scaling", because scaling in terms of users is nonsensical for frontend dev (unless you consider browser compatibility as scaling, to which statement I'm orthogonal).

Note: meanwhile, backend dev is about writing a program that accepts as many users as possible, which makes front and back nicely yin-yang together. But maybe I'm overstating things here :shrug:


Doing most of the work on the client instead of the server can often make scaling much easier.

I read your comment as assuming that the division between the server's job and the client's job is fixed, but as the browser has become more powerful many things have moved to the client.


Doing the work on the client can sometimes be the right choice, but I'll share a converse point. I was tasked with implementing a bulk record upload feature. The UI was to accept a CSV file with up to 1 million records. The API provided by the backend team accepted a max of 1,000 records in JSON.

So now I have to convert CSV, for which there's no official specification, to JSON, and then make (up to) 1,000 HTTP requests via HTTP 1.1 -- and a different backend team owns rate limiting for all ingress. I end up using a package called 'promise-limit' to ensure only N requests are open at a time; firing 1,000 simultaneous XMLHttpRequests is a bad idea. I end up fielding bug reports concerning the translation to JSON for edge cases. And collating the 1,000 network responses for any individual record that failed to persist was something we didn't get to building. So users were uploading 1M records, had some proportion fail, and had zero visibility into what data the system had, and what data didn't make it.
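For readers unfamiliar with the pattern, a concurrency limiter like that can be sketched in a few lines (this is an illustrative stand-in, not the actual 'promise-limit' API):

```javascript
// Illustrative sketch of a promise concurrency limiter (not the real
// 'promise-limit' package): keep at most `maxConcurrent` tasks in flight,
// starting the next queued task whenever one settles.
function createLimiter(maxConcurrent) {
  let active = 0;
  const queue = [];
  function next() {
    if (active >= maxConcurrent || queue.length === 0) return;
    active += 1;
    const { task, resolve, reject } = queue.shift();
    // Run the task, pass its outcome through, then free the slot.
    task().then(resolve, reject).finally(() => {
      active -= 1;
      next();
    });
  }
  // Returns a wrapper: call it with a () => Promise to schedule the task.
  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      next();
    });
}
```

Wrapping each of the 1,000 requests with such a limiter keeps, say, 5 connections open at a time instead of firing them all at once.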

Eventually, we gutted all of that and the backend team added a file upload endpoint. But it's tough to explain to folks who primarily or exclusively do backend work (1) all the pain points of dealing with the complexity on the front end and (2) the scalability issues. "Can't you just do it on the front end?" is such a frequent retort I receive, for things we truthfully should do on the backend.


The argument to make to the backend team is a bit nuanced but it’s this: Generally each PUT/POST sent to the backend should have atomic semantics. If there’s some user action (like uploading a CSV) that should entirely succeed or entirely fail, you want to send it in a single HTTP request. If it matters that it happens exactly once (like submitting a forum post), then adding a nonce of some sort is a good idea too - just in case the request succeeds but the response gets dropped and the user retries the action.

It’s often tempting to model URLs like database entries and send multiple requests to the server to model updating many records based on one action. That’s a mistake - have the client name the action, send one http request, and execute it in a single database transaction in the backend if you need to. This approach is way more reliable, debuggable and consistent. This sort of thing is absolutely the backend’s responsibility.
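As a sketch of that idea (the endpoint path and header name below are made up for illustration, not a standard API): the client generates one key per user action and reuses it on any retry, so the server can apply the upload at most once.

```javascript
// Hypothetical endpoint and header names; the point is a single atomic
// request per user action, with a client-generated nonce reused on retries.
async function submitCsvUpload(csvText) {
  // One key per user action; a retry of the same action reuses the key,
  // letting the server deduplicate.
  const idempotencyKey = crypto.randomUUID();
  const response = await fetch('/api/bulk-uploads', {
    method: 'POST',
    headers: {
      'Content-Type': 'text/csv',
      'Idempotency-Key': idempotencyKey,
    },
    body: csvText, // ship the raw CSV; parse and validate it server-side
  });
  if (!response.ok) throw new Error(`Upload failed: HTTP ${response.status}`);
  return response.json();
}
```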


If your backend team doesn't hear "I might have to make 1,000 requests" and immediately think "oh, that probably times out" and "oh, do we have to re-batch that data" and immediately make a file upload endpoint, they're committing malpractice. Good grief.


"committing malpractice" is a very generous way of putting it.


That's interesting. From my experience, the backend guys are usually more down-to-earth and often save the frontend guys from doing overcomplicated/bad stuff. Not wanting to implement a file upload endpoint for that task sounds suspiciously like some unmotivated guy that doesn't give a sheit about the app.


Your backend devs are shirking responsibility, the front end shouldn't prep data, it should handle input and do rendering.


> Doing most of the work on the client instead of the server can often make scaling much easier.

I've also found that it results in a non-responsive webpage. For instance, text taking multiple seconds to render in an input box (Slack, Facebook, amongst others), and times where the browser tab process ends up allocating a significant percentage of system resident memory and/or CPU resources, which makes the computer itself slow to respond.


> Doing most of the work on the client instead of the server can often make scaling much easier.

Spitting out html from the server is incredibly easy to scale.


Aka 3-tier architectures back in the day...


xamber's point remains valid though -

while code may move to the frontend, the notion of scaling isn't a function of the number of users (as the original op erroneously states).

maybe they meant their front end code scales as complexity is moved from server to client? that would make sense.


I think GP meant that you don't have to send a server request to handle every user interaction. The server has to handle far fewer user requests, and only returns data (instead of generating the template as well).


Frontend dev usually includes dev of backend components that serve the frontend. Even serving static html files has to scale to every user.


They are fast only on fast and expensive user equipment:

https://infrequently.org/2021/03/the-performance-inequality-...

More context in older post:

https://infrequently.org/2017/10/can-you-afford-it-real-worl...

> Partner meetings are illuminating. We get a strong sense for how bad site performance is going to be based on the percentage of engineering leads, PMs, and decision makers carrying high-end phones which they primarily use in urban areas.


This entire read seems to be that things are much better for YOU, the front-end developer.

And yes, that makes a difference, you can deliver more features, faster, more reliably.

But, if those sites just bog down and take double-digit seconds to load, even when properly deployed on scaleable delivery architectures, with fiber-optic speeds, and CAD/gamer-level power machines, they are junk.

And I've increasingly seen exactly this junk over the last few years. Even (and often especially) the major sites are worse than ever. For example, on the above setup, I've reverted Gmail to the HTML-only version to get some semblance of performance.

Sure, some of this could be related to Firefox's developments to isolate tabs and not reuse code across local containers and sessions, but shipping steaming piles of cruft because you expect most of it will be pre-cached is no excuse.

Your site might have the look and features of a Ferrari, but if it has the weight of a loaded dump truck, it will still suck. If you are not testing and requiring good performance on a rural DSL line and mid-level laptop (or similar example of constrained performance), you are doing it wrong.


I can't agree more. I could cry. I've been on a crusade for over a year to scale back diddly JS frameworks because of 'scale'. I've heard many bullshit excuses before, but sacrificing performance and reliability for being "scalable" is beyond mind-boggling. It gives me a panic attack that Medium articles are the chief architects and purveyors of software development truths. I live and work in developing countries and could never understand why sites kept getting slower around 2017, before I stumbled into Medium with its React and K8s articles ad nauseam. I realised the junior and intermediate devs are reading these things as truth and applying them as gospel.

My head hurts just thinking about react.


Yup, all these rapid-dev frameworks and tools may be fantastic for rapidly developing and refining a prototype.

But that is just the beginning of the process, the job is barely half done.

The devs must then focus on exactly what is needed in the code to refactor and strip out every unnecessary instruction and byte of download.

Page load and refresh times of more than three-digit milliseconds rapidly become unusable. When it's five-digit milliseconds on a modest machine, you are shipping junk, and should not consider yourself a developer. Amateurs can do better.

Wow, do I miss the days of the 5KB web page competitions...


Your sarcasm is well and truly appreciated but misplaced.

The idea is that these libs and frameworks make an honest attempt at making web apps. Well intentioned, but wide of the mark. Most people access the internet through mobile phones[1] with dodgy broadband connectivity, and hence don't need an app running inside another app. Do you understand the redundancy, and the pitfalls of janky, horrible UIs, for a marginal performance improvement? Or the famous "it doesn't reload the whole page", which very few people actually care about?

If you are building the next instagram, go right ahead and use bleeding edge js frameworks, but not every site/blog needs to be an app.

[1] https://gs.statcounter.com/platform-market-share/desktop-mob...


Ha! I think we actually agree pretty strongly.

Except that I'm not even sure that the next Instagram or whatever should be a web app - it should be a native app.

I saw the beginning of the "write once - run anywhere" concept decades ago. Sounded fantastic. Yet here we are and it is more like "write once - stumble and crawl someplace".

It is most discouraging that many major houses do it so badly, and even their native apps are similar bloatware...


How would you personally go about building a sophisticated web application?


By first figuring out if I need to build a web application.


Got to agree with everything but

> Debugging modern front-end code is a breeze

Is still hit or miss every once in a while, depending on the use case. For example, debugging service workers is still a bit of a nightmare. And I've also had some issues debugging a Chrome extension written with the Vue CLI + the browser-extension plugin.


This is sadly true, but service workers are themselves hard to get right, and they tend to create subtle cache issues that are hard to notice until the deployment after the one that put them into production.


I'd argue service workers are a pretty rare thing to debug


I guess you are a talented developer working in the right environment. But the vast majority of sites out there (not controlled by some tech behemoth) are usually unoptimized, only work in Chrome, are usable only by able-bodied people, and so on, and so on.


This. People see one shitty implementation of SPA and put every new tech stack in the same bucket. We live in a golden age of tooling and the front end is right at the front (pun intended).


Whenever this topic comes up, I only hear praise from devs. Devs only care about what they can do; they never think from the user's side. That's why we have shitty JS-infested sites: people who think everyone has the latest gadgets, sits in air-conditioned rooms, and has unlimited fibre internet just like them. Devs live in a bubble. It has failed users.


There are many reasons why having SPA and rendering your site on the client is a bad idea.

First, you are basically breaking the concept of the web, a collection of documents, not a collection of code that must be executed to get a document. That has many bad effects.

Browsing is slower: you have to download the code of the whole application and wait for it to execute further API calls to the server before the page is usable. That can take a couple of seconds, or even more on slower connections. With the old pages rendered server-side, not only did you not have this effect, but the browser could also start to render the page before it was fully received (since HTML can be parsed as a stream). Not everyone has a fast connection available at all times, and it's frustrating when you have only a 2G network available and you cannot do basically anything on the modern internet.

It's less secure, since you are forcing the user to execute code on their machine just to look at an article in a blog. And JavaScript engines in browsers are one of the most common sources of exploits, given their complexity. Also, JavaScript can access information that can fingerprint your browser to track you, without any particular permissions. Ideally, in a sane world, most websites wouldn't require JavaScript, and the browser would show you a popup to allow a website to execute code (just as it asks when you access the camera).

It breaks any tool that is not a browser. In the old days you could download entire sites with wget onto your hard drive to consult them offline; with "modern" SPAs that's impossible. Of course that implies it breaks things like the Wayback Machine, and thus history is not preserved. And search engines penalize sites that are not clean static HTML.

It's also less accessible, since most SPAs don't respect the semantics of HTML; everything is a div in a div in a div. And of course you still need a browser, while with HTML documents you could process them with any software (why does a blind person need to render the page in a browser, when a screen reader could simply have parsed the HTML of the page without rendering it on screen?). It breaks navigation in the browser: the back button no longer works as you expect, reloading a page can have strange effects, and so on. I can't reliably look at the address bar to know which page I'm on.

Finally, it's less robust. An SPA is a single point of failure: if anything goes wrong, the whole site stops working, while a bug on one page of a classical server-side rendered website breaks only that particular page. Also, error handling is absent from most SPAs. For example, what happens if an HTTP request fails? Who knows. Look at submitting a form: most of the time there is no feedback. On a classical server-side rendered application, if I submit a form I either get a response from the server, or the browser informs me that the request failed and I can submit the form again.
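For what it's worth, that missing feedback path is cheap to provide. A minimal sketch (function name and message wording are illustrative): treat both network failures and non-2xx responses as explicit, user-visible outcomes instead of failing silently.

```javascript
// Sketch of explicit form-submission feedback: network failures and
// non-2xx responses both become messages the UI can show the user.
async function submitForm(url, formData) {
  try {
    const response = await fetch(url, { method: 'POST', body: formData });
    if (!response.ok) {
      // The server answered, but with an error status.
      return { ok: false, message: `Server error (HTTP ${response.status}), please try again.` };
    }
    return { ok: true, data: await response.json() };
  } catch (err) {
    // fetch rejects on network failure; tell the user instead of staying silent.
    return { ok: false, message: 'Network error: check your connection and retry.' };
  }
}
```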


>First, you are basically breaking the concept of the web, a collection of documents, not a collection of code that must be executed to get a document

I think that ship sailed a long time ago. Nevertheless, I still think that current SPA frameworks are overblown. I am partial to JavaScript/HTML + WebComponents. However, I put the blame on the influx of "coderz" that in the last 10 years have arrived to do frontend development.

It is very similar to the use of thick frameworks in the backend (LoopBack, Ruby on Rails, etc.): there are people who do not have a clue about what they are doing, but they still build SPA websites by changing some code here and there after an "npm create-react-app-shopping-cart" script.

I've interviewed hundreds of "Frontend Developers", and for the majority of them, once you start getting into the concepts of HTTP headers, why we have CSS, why we use font icons and image sprites, CDNs, JavaScript closures, JavaScript this/that issues, let/var differences, etc., they stare at me with blank eyes.
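The let/var difference, for instance, takes only a few lines to demonstrate with closures in a loop (a classic interview snippet):

```javascript
// var is function-scoped: every closure created in the loop shares the
// same binding, so they all see the final value of i.
function collectWithVar() {
  const fns = [];
  for (var i = 0; i < 3; i++) fns.push(() => i);
  return fns.map((fn) => fn());
}

// let creates a fresh binding per iteration: each closure captured its own i.
function collectWithLet() {
  const fns = [];
  for (let i = 0; i < 3; i++) fns.push(() => i);
  return fns.map((fn) => fn());
}

// collectWithVar() -> [3, 3, 3]
// collectWithLet() -> [0, 1, 2]
```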


Some of these stated issues aren't universal. There are frameworks that pair a server-side rendered page with a switchover to an SPA: the user gets a pre-rendered page, a follow-on JS bundle loads the SPA, and the switch is seamless. It's a bit of extra work, so I don't always use it, but it exists. This also removes the incompatibilities with non-browser tooling/bots/etc. (Example: quasar.dev)

2G is being phased out (at least where I live); it no longer works here. Chromium browsers have a slow-3G network throttling mode to test your code/sites. I use it frequently. Sometimes it reveals problems, other times it's a shrug. Some features just require bandwidth and there is no reasonable way to avoid it. I do try to reduce page weight as much as I reasonably can.

Lots of features of the modern browser will leak fingerprintable information. The JS engine is hardly the only problem piece. I guess your general complaint here is that the world is insane. I tend to agree, but I don't think this is the principal reason why :)

Screen readers are notoriously difficult to make work correctly. In my experience, this is a universal issue with the web generally and isn't unique to SPAs. If we're serious about audio accessibility we need an actual standard instead of this guesswork.

Breaking navigation is a pet peeve of mine too, poorly behaved front-end routers are terrible. I always target working urls into any part of the app as refreshing the page is a common operation. And I must have a properly functioning back/forward for my own sanity in development if nothing else.

I will beg to differ on the point of failure. SPAs can also be apps, paired with a service worker and a local data set they can continue to function even if you are completely offline, or the server is completely offline. Obviously if your web app is just broken, yeah, nothing's going to work. But in the case of SSR, that's also true if your network is down (locally or remotely), or the server is down/broken.

A well built offline-first SPA will reduce the total number of single point failures that kill all functionality. Also, offline-first SPAs have the side effect of masking some of the latency over a slow 3G link - which can improve the user experience significantly.


Interesting take that I agree with more than disagree with. However, I think "there are an infinite amount of tools both for styling and the code itself" is actually one of the symptoms of the issue at large.

I understand some sites depend on more complex APIs and such - certainly Facebook uses the complex stuff that these frameworks do.

However, many people are using WordPress and similar tools simply to have a few basic static bits of info from a few pages display decently on various devices. I too am guilty of such a thing on more than one occasion.

For these cases I am moving towards converting the designs to static HTML and using a PHP-based script from CoffeeCup to handle the contact form part, which in the past I've lazily handled by adding a two-click plugin from WordPress.

I feel that the CSS standards group really dropped the ball by not having better responsive menus 'built in'; that's a part of the problem that could be solved in the future. Now that grids are built in, the menus that Bootstrap does auto-magically for most devices are the missing piece that keeps many sites from being just HTML/CSS.

I'd love to go back to NetObjects Fusion for design, but it has not kept up with the responsive needs of the web. I tried CoffeeCup's Site Designer in beta and it wasn't for me. I've built dozens of sites using Notepad++ and occasionally load up Sublime Text for find/replace - but I still feel that more visual/WYSIWYG-type tools are greatly desired by much of the world.

WordPress is going more in that design direction as the Gutenberg blocks start to work with full site design and not just page content. And I still keep meaning to take the time to try out the Pinegrow site builder, as that might be the replacement for NetObjects that I've longed for.

But it's not just me - there are plenty of people who could/would make a site and find things too complex today. About 7 years ago I found someone in the top 3 search results for a home service and inquired about their designer/SEO. The guy who was there doing the work told me he made the page in Microsoft Publisher.

While I'm not advocating for the bloat that Microsoft's FrontPage pushed into the world, and I know the time of clear-pixel alignment is a distant memory, even though we have an infinite amount of tools, it still seems the front end world is more complex than it needs to be.

It is better in some ways and worse in others. I hope CSS gets better menu options so there can be fewer pieces needed for a decent puzzle. I like non-JS sites just fine, and building with fewer tools is okay too.


I agree entirely. The worst sites are imo the ones not using modern frontend technologies.

I think people are forgetting how bad it used to be. Loads of jQuery spaghetti code everywhere, rerendering the page 1000 times with a nested for loop.

Also, web applications have become so much more complex than they were (but still work!). Things like Figma would have been unthinkable even a few years ago. And, even though it's running in a browser, Figma feels far more responsive than Illustrator or Fireworks (RIP), plus it has a load of collaboration tools which those desktop apps don't have.


What aspect of Figma do you mean? Placing elements on a canvas has been possible almost forever...


Figma makes heavy use of WASM, WebGL and Web Workers; it's much more complicated than just rendering some shapes on a canvas with a scene graph.


We have to distinguish websites from web applications.

Websites have no reason to use JavaScript. They should be a collection of documents, possibly generated dynamically on the server side, but to the user HTML+CSS that can be viewed in a browser, downloaded with curl, viewed as source code, etc.

Web applications have no reason to exist. Basically we are turning the browser into a JVM, a way to run code portably. But this has nothing to do with the web! It's just bytecode loaded from a remote server. What do Figma, or Gmail, or Google Docs have to do with a website? Nothing.

These things shouldn't be implemented in a browser; it's not the right place. These problems should have been solved at the operating system level, but it was too difficult for Microsoft to agree with Apple and the open source world on a portable operating system interface that would make universal binaries possible across platforms, maybe with a bytecode, maybe with a way to load code from a URL.

No, they couldn't agree, so they took the only really open standard, the web, and bloated it, basically making it an operating system (well, Google even literally made it one!). What's the point? Now most people run Windows or macOS with a browser on top (probably Chrome), in which they run 95% of the software that they use. Does this make sense? To me it's a huge inefficiency that only makes things slower; slower, even, than things were in the 90s. Despite hardware that is orders of magnitude more powerful, using the Gmail web interface feels slower than using Outlook on Windows 98 did.


>Websites have no reason to use JavaScript

Put another way: "websites have no reason to execute code". Is a document with an embedded form a website or a webapp? What if one of the inputs is custom and requires some custom validation logic? Is the expectation, from your train of thought, that this logic must only execute on the backend?

Websites are interactive documents and they have been since the inception of the hyperlink. I struggle to draw a line where a website becomes a web app outside of the obvious examples at far ends of the spectrum.


As someone who does neither front-end nor back-end development, I think this comment does the best job in this thread of summarizing the pitfalls I've observed as a user.

Although you probably (?) want to add that OS maintainers are slowly turning their repository of available OS-level applications into walled gardens, which I could see may push some developers to currently prefer web application implementations.


The web is the closest we have to an open universal platform. If you want the most bang for the buck as a business in developing widely accessible software, you build a website/webapp first.


That might change the limit on how big the UI can be, but it changes nothing fundamental. As far as I'm aware of what Figma can do (we use it at work but I seldom use it directly), you could implement the same thing with DOM elements or simply with SVG. It's not fundamentally different from the Pipes UI I implemented with Raphael.


> you could implement the same thing with DOM elements or simply with SVG

In the same way that writing a REST API is "connecting to some DB and simply exposing data via some endpoints" or building a machine learning model is "simply throwing some data at some algorithm and getting a bunch of analysis out" or devops is "simply starting some remote servers" or creating an OS is "simply writing a bootloader and then some stuff on top."


No. In the sense that having written UIs that go in that direction, adding refinements and performance improvements to them does not change the base characteristics of what has to be written.

Remember, my comment was a response to the claim that something like Figma could not be done before the current frontend-development complexity mania. But there is nothing in that collection of bad tools that is necessary to write a canvas-based UI like that. It could have been done before, and it most likely has been done before.


>Modern frontend development provides many opportunities for failure

I think this is the important thing here. Everything feels less stable, and more prone to breaking, on the modern web. You write some simple HTML, style it with CSS, and write vanilla JS for the parts that need it, and everything feels solid. You start a new project with a framework, and it seems like there is this whole area of your project that is essentially a black box, ready to break (or misbehave) any time.


At some point, there is an irreducible amount of complexity. Beyond that point, adding things like overwrought frameworks to "make things easier" for the developer just pushes the complexity around, like an air bubble trapped under a screen protector.


There's necessary complexity, and then there's extra complexity.

IMHO, a lot of frameworks just add extra complexity. Now that we're in the twenties, you don't need nearly the level of browser compatibility shims you did at the turn of the century. Sure, bleeding-edge features need them, but most sites don't need any of that. Certainly news and shopping fit well enough within turn-of-the-century browsers, so everything else is fluff (and retargeting, ugh).

It's the same with a lot of server-side frameworks, especially PHP frameworks. I've never been able to grasp what a PHP framework can do that PHP can't, other than burn 50ms of CPU before outputting anything.


Yes, we now have what everyone fought for in the 00s - browser support for web standards - as well as full ES6 support, yet here we are with front-end devs falling like flies from burnout. It's like CPU speed bumps: all we do is invent more complexity to whittle away the gains we've made. Makes me just want to revert to procedural PHP and jQuery/vanilla JS.


> You start a new project with a framework, and it seems like there is this whole area of your project that is essentially a black box, ready to break (or misbehave) any time.

Any framework you're not familiar with will feel like this. This isn't something unique to frontend frameworks.


> Any framework you're not familiar with will feel like this. This isn't something unique to frontend frameworks.

But the frameworks get out of fashion faster than you can obtain deep knowledge of them.


What framework was popular 4 years ago and is now unmaintained? This is a common remark that was true maybe 10 years ago but hasn’t been in quite a while. React, Angular, and Vue all have dedicated maintainers and millions of users, and have for several years now.


This is starting to no longer be true. We've held Angular/React in the top spots for a while now. Vue is probably next in line.

However, both front-end and back-end are substantial enough now and have enough frameworks and environments that it's really hard to do well at both. I'd much prefer more specialization, which would hopefully lead to higher quality.


i am not sure there are true top spots in the javascript world.

in my circles more and more people look at svelte to try to get away from frameworks altogether.

new tooling is always just around the corner: there is a webpack killer announced every couple of months.

this very site is full of articles like "why we switched from/to react", etc.

in the javascript community "stability" is anything with a bit less churn than the bleeding edge. i think it will result in a massive burnout for a whole generation of developers.


Being concerned with a technology "falling out of fashion" is a personal problem not a problem with any framework. In any case, this is not a concern when using mature technologies which are the ones you should be using.


This is a really rose-tinted view of the world. Writing JavaScript works OK now that the language and browser support have matured. 10 years ago it was a complete nightmare.

Regardless, without a framework you end up reinventing core framework features anyway. Yes, a framework-driven app is more complex if the page/app is extremely simple. But if it is of any complexity, a framework-driven app is going to be far easier to maintain and reason about in 99% of situations.


Your comment is a common retort, but I think it highly depends on what I'm building. If I'm building something simple, like a CRUD app, I really don't need a framework and won't end up reinventing core framework features.

Give me basic templating and server-side rendering and I'm good to go.


This is partly true, but I would also like to point out that things have much improved. I can vividly recall the 2000s where practically every website would look and behave differently in different browsers. I recall a lot of debugging client state on poorly scripted PHP websites. Tooling is so much more advanced, and things "breaking" has moved from the user to the developer. In the jQuery days (or even before that) it was very hard to find out if everything was working as it should. Early feedback is a good thing. Understanding a framework is part of understanding what you are shipping to the customer or user. Sure, it might seem more complex, but at least complexity isn't pushed down to the browsers and users.


> I can vividly recall the 2000s where practically every website would look and behave differently in different browsers.

That was a time before the web became more standardized as an app platform. Corporations like Microsoft with .NET, Sun with Java, and Adobe with Flash were all competing to take over the whole world of write-once-run-anywhere frontend apps, become the platform of choice, and lock everyone into their way of doing things. Also, there was no distinction between desktop computing and mobile computing, as mobile computing back then was still underdeveloped.

Now the web is the frontend platform of choice. Ostensibly the platform isn't owned by anyone but the W3C consortium (which is good), but practically speaking, Google won since everything is WebKit now (which is worrisome). And this still has not solved the problem of the web being built on an incredibly poor programming model that makes everyone want to use virtual-DOM frameworks and CSS preprocessors.

Hopefully, WebAssembly will replace the status quo and let people develop front-end apps using better programming languages and frameworks than just the ones that compile to JavaScript.


You write some simple HTML, style it with CSS, and write vanilla JS for the parts that need it, and everything feels solid.

Bootstrap + jQuery on the front end, Flask or similar with CherryPy on the backend. SQLite or Postgres if you need it. That will handle 99.9% of websites, it will be cheap and easy to develop and host, and deliver a great experience to the end user.


> Bootstrap + jQuery on the front end

Why? CSS can do layout with grids and flexboxes just fine, while es6 plus the modern browser api are as ergonomic as jquery. Why pull these dependencies from a decade ago?
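For what it's worth, a quick sketch of a few jQuery-era utilities next to their ES6+ built-ins (DOM selection is shown only in a comment, since it needs a browser):

```javascript
// jQuery utilities vs. their ES6+ built-in equivalents.
// DOM selection: $('.item')  ->  document.querySelectorAll('.item')

// $.map(arr, fn)  ->  arr.map(fn)
const squares = [1, 2, 3].map(n => n * n);        // [1, 4, 9]

// $.extend({}, a, b)  ->  object spread
const merged = { ...{ a: 1 }, ...{ b: 2 } };      // { a: 1, b: 2 }

// $.inArray(x, arr) !== -1  ->  arr.includes(x)
const found = ['es6', 'fetch'].includes('fetch'); // true

// $.trim(s)  ->  s.trim()
const clean = '  hi  '.trim();                    // 'hi'
```

None of this needed a library download or a build step, which is the point.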


CSS can do layout with grids and flexboxes just fine

I never bothered to learn CSS is why. Bootstrap looks good and consistent with what users expect and requires almost zero effort to use.


You'll eventually need JS. If the server side renders HTML, your client is now coupled to the server side on both the API and the HTML. That's bad.

Now 1 component will be: server-side HTML, client-side HTML, CSS and JS.


>That's bad

Nice generalization. Tell that to the thousands of absolutely massive companies running on server-rendered HTML how they need to adopt cutting-edge JS.


I think a lot of it had to do more with business requirements than art or science. The business folks stroll in and make these generic one-click deploy type things to get sites up and running quickly, at the expense of being vetted by good artists and scientists.

A good analogy I think is any person using MS publisher to create a book layout. Sure it can be done quickly because the software does the thinking; but should it? Good books have skilled typographers designing the proportions of the pages relative to text block, the fonts, et cetera. The end result is a book that one scarcely notices but feels very pleasant to hold and experience.

We need websites that are subservient to their content; sites we scarcely notice but thoroughly enjoy.

EDIT: one last point: I think part of the problem is also that browsers have an identity issue. E.G. they started as a static publishing platform, but now they are a full blown operating system with web assembly etc. As we know, anything that does too many things is inherently complex, and hence “bloated.”


Personally I think most of the "failures" out there are caused by organisational issues not technology issues. Namely, marketing departments given all the power over what the website does and what's bundled with it (trackers, a bunch of heavy/intrusive marketing tools), and optimisation that values increasing revenue above increasing actual usability. (Sometimes these align; often they don't).

You can make a SPA that's a joy to use and loads and runs extremely fast. Hardware has never been faster; browsers and application frameworks have never been as good as this (I'm talking about the web stack). It's really nothing to do with the tools or technologies; it's that marketing insists it needs mixpanel, GTM, optimizely, hotjar, the FB pixel, smooch, and god knows whatever else in order to do its job effectively.


You are confused about your comparisons, because the largest reason for FE failure is bloated code, a problem unrelated to the dangers of C.


C is to Rust as Webpack is to ______.


What's the business case for less JS? Recently the company I worked for started a project where it seemed pretty clear to me that it would be possible to develop with very little JS running on the frontend. But I couldn't convince anyone of it, because no one saw enough benefit from doing it that way.


I think modern FE won out primarily because it provides Decorator OO, which has been recommended for GUI development (including by the GoF) for ages, and which also matches perfectly with a tag-based declarative language such as HTML. As such, ditching the "HTML templates" paradigm was a no-brainer for OO devs to start with. For many, "templates", their performance, their restricted mini-language and all the boilerplate that comes with them, just had to die.

But then, with Angular/React/etc, you not only have 1 project on your hands but 2 different projects that must be compatible: the backend and the frontend. These 2 projects ought to be in 2 different languages, unless you are developing the backend in JS too - maybe there's even an ORM that can generate migrations for NodeJS these days! But that wasn't the case last time I checked. You also lose the ability to do server-side rendering without the additional effort of deploying a rendering server, with all that comes with it.

People ditched jQuery saying that vanilla JS was just fine, but that turned out not to be quite the case, so there are still releases of jQuery. At the same time NodeJS was released, then npm, and then Angular/React/etc, which in my opinion were created with two goals in mind: 0. having OO components for GUI dev, and 1. offering IoC to overcome the difficulty of dealing with custom component lifecycles, which in jQuery leaves you to monitor the DOM and instantiate/destroy each plugin by yourself. Idk if there are other reasons, but it seems to me that apart from that, DHTML is still pretty much the same: you're adding some logic to change tags and attributes, anyway.

Today we have a chance to break these silos again with the Web Components and ESM browser features and W3C standards, because they elegantly solve the problems that React/etc seemed to be designed for and do not impose a frontend framework: you can just load the script in your page and start using <custom-tag your-option=...>... The browser will manage the component lifecycle efficiently, and devs can use it in their templates without having to load a framework, so everybody wins.
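To illustrate, a minimal custom element needs no framework or build step at all. This is a hypothetical sketch (the tag name and render helper are invented for the example, and registration is guarded so the pure part also runs outside a browser):

```javascript
// Pure render helper: trivial to unit test without a DOM.
function renderGreeting(name) {
  return `<p>Hello, ${name}!</p>`;
}

// Custom element wrapper: the browser manages its lifecycle.
if (typeof HTMLElement !== 'undefined' && typeof customElements !== 'undefined') {
  class HelloTag extends HTMLElement {
    // Re-render when the observed attribute changes.
    static get observedAttributes() { return ['name']; }

    connectedCallback() {
      this.innerHTML = renderGreeting(this.getAttribute('name') || 'world');
    }

    attributeChangedCallback() {
      if (this.isConnected) this.connectedCallback();
    }
  }
  customElements.define('hello-tag', HelloTag);
}
```

After loading that script, `<hello-tag name="HN"></hello-tag>` works in any page, with or without a framework.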

This is also a chance to simplify our code and lighten it by throwing away dependencies. Of course, if you want to do full client-side navigation with an API backend you still can, but you can also make generic components that you will be able to reuse in other projects that do not use a particular framework. You need no tool to make a web component; this is an example of a jQuery plugin that was ported to a web component with no dependencies, with browser tests in Python: https://yourlabs.io/oss/autocomplete-light/-/blob/master/aut...

Webpack does a lot but is still slow for development; we're seeing the light with Snowpack and esbuild, which allow you to keep webpack for production only (i.e. to generate a bundle in the Dockerfile) and benefit from actually instant reloads thanks to ESM.

So if you go for web components and Snowpack, you get an extremely lightweight toolkit, which I love, that will work for every page that's not an overly complicated web app. But then I thought: I actually don't have that much frontend code, and it would be nice to have it alongside the backend code, so we went for a Python->JS transpiler to develop web components in pure Python, which also replaces templates. It was surprisingly fast to implement: https://yourlabs.io/oss/ryzom#javascript

Whether this is improving the situation or not depends on your POV, heck, I'd understand if you even hold it against me, but the frontend development tooling is still evolving for sure, and I can see how the browsers are making efforts (except Safari) to simplify DHTML development, because:

“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.” – C.A.R. Hoare, The 1980 ACM Turing Award Lecture


If you adopt microfrontends, you can ship your app as N number of smaller bundles, instead of one monolithic bundle. The Webpack performance in this setup brings me joy. It's not 10ms to bundle, but 2-3 seconds cold boot in dev mode is completely acceptable, with rebuilds nearly instantaneous.

This setup is described in more detail in single-spa's documentation.


Exactly, you can see that the web component I linked is a microfrontend. Interestingly, I first ported the code to StencilJS, but then it seemed the framework actually got in my way and I removed it. I prefer the code as it is now, and there's not even a build step, because it's vanilla JS. You just add the script tag in your page and start using the <autocomplete-light...> tags with your options. I used the MDN custom element documentation without any framework.


I like what I'm seeing in Web Component technology, but it's not fully there for me yet.

Suppose your base Design System is web components. Most Microfrontend (MFE) teams will still pick up an off-the-shelf React, Vue, Angular, etc.

But VDOM and web components don't always play nicely. If you have parent-child web components, specifically 1 parent with N immediate children, React can't resolve this gracefully. Every render past the first breaks with Stencil. That's a pretty significant issue, because (very) common components like Select dropdowns are modeled as 1 parent with N immediate children.

So you can no longer stay in React's declarative model. The workaround (last comment in the issue) has you writing imperative code to fix the problem.

It has that "using D3 in React" vibe to it. It works, but it's not nearly as elegant as if you had, say, a React-based Design System (but then, of course, all your MFE teams must use React).

https://github.com/ionic-team/stencil/issues/2259

Users have also reported the same analogous issue exists for using Web Components in Vue too, so it's not even a React-specific shortcoming.

https://github.com/ionic-team/stencil/issues/2259#issuecomme...


I don't think it matters so much for React/Angular/Vue to support web components, because they will be dropped in favor of vanilla JS just like jQuery was, thanks to Custom Elements, which are recommended for MFE, and ESM, which likely will be too. Mixing different frameworks is a promise that I don't care about, but you got some pretty interesting feedback about it; it seems like it might well take a few years of effort.


> My users are happy.

Are they? I'm skeptical we can know that for any given website; specifically, accessibility is a huge pain point for so many sites, and it would seem difficult to know how many people are turned away from a site because it isn't set up to serve them.


Couldn't agree more on this...


Regardless of my tweet, which is a personal opinion based on my feelings, background and experiences, the discussions in the replies are interesting from the POV of understanding what different people think in the programming community. Some will say that critiques come only from old people, in the style of "when I was young we had only zeroes! You now have ones and zeroes". Others say that the web is slow because of ads and trackers, and that modern frameworks would otherwise be fast. There are those who believe that the problem comes from big companies imposing new standards in order to turn the web into the new Java applets (a full-fledged application platform), and so forth.


I read it and feel it's the same as how we perceive modern music. Everyone says music "used to be better" but that's just survival bias. Just like music, there was a LOT of trash web development back in the day as well. Sites built with tables in Dreamweaver, best viewed in Netscape Navigator at 800x600, with animated GIFs bogging down the download speed, existed long ago.

What we have today are the same problems with a new wrapper, created by a not dissimilar group of people from the past.


>I read it and feel it's the same as how we perceive modern music. Everyone says music "used to be better" but that's just survival bias.

Well, if the top 100 tracks from, say, 1961 to 2021 progressively get less musically diverse, with simpler chords, fewer harmonies, less timbral variety, lesser melodies, more repetition, less dynamics, less genre variety, more infantile lyrics (something that has been studied and measured several times; see e.g. the links below), etc, then it's not some "survival bias".

https://www.researchgate.net/publication/346997561_Why_are_s...

https://newatlas.com/pop-music-trends/23535/

https://www.nature.com/articles/srep00521

https://www.res.org.uk/resources-page/economics-of-music-cha...

https://pudding.cool/2018/05/similarity/


To be fair, you have to adjust for the relevance of top 100. In the past the top 100 was the only thing most people were exposed to because that's all the radio played. Today Spotify and Youtube and their recommendation algorithms make the top 100 almost completely irrelevant.


The size too: there are far more songs released now than in the 1960s. So it makes sense that the "crème" at the top is more homogenised.


Why does that make sense? More available music should imply a more varied range of good music to rise to the top.


It happens in some simplified models. Suppose a thousand tracks get released every year, and these tracks are evenly divided among 50 different types of music, 20 tracks per type. Some of these types of music are very popular (4.95 stars), some are slightly less popular (4.90 stars, maybe because they offend some powerful group), and some are much less popular (4.00 stars). So the most popular 100 tracks will be the 20 tracks from each of the 5 most popular music types.

If we increase the number of tracks to ten thousand, there are a couple of ways we can go. We could increase the number of types of music to 500, and in that case, we'd see better music rising to the top. (Or at least more popular music, which may or may not coincide with being better.) Or we could increase the number of tracks per type to 200, in which case a random half of the tracks of the most popular 4.95-star music type will be the "top 100" for the year.† Or we could go for the middle of the road: maybe now we have 150 types of music and 67 tracks of each one. Or we could have musicians and record companies that respond to incentives by trying to produce more music of the popular types, somewhat handicapped by the fact that those popular types change every year, and however much Nickelback might try it, recording the same song under twenty different titles doesn't actually give you twenty top-ten hits.

Regardless, there are a lot of ways that more published music could both provide more variety and less varied top hits.

______

†Of course the Billboard Hot 100 is per week, not per year, but that's the least of the oversimplifications in this model!
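The arithmetic of that first scenario can be sketched directly (a toy model only, mirroring the numbers above; the function name is made up):

```javascript
// Toy model: tracks are split evenly across music types, and types are
// strictly ordered by popularity, so a top-N chart fills up with whole
// types from the most popular type downward.
function typesInTopChart(numTypes, tracksPerType, chartSize) {
  const typesNeeded = Math.ceil(chartSize / tracksPerType);
  return Math.min(numTypes, typesNeeded);
}

// 1,000 tracks/year: 50 types x 20 tracks -> the top 100 spans 5 types.
const small = typesInTopChart(50, 20, 100);

// 10,000 tracks/year, still 50 types: 200 tracks each -> the top 100 is
// a single type, even though total variety went up.
const big = typesInTopChart(50, 200, 100);
```

So in this model, growing the catalog without growing the number of types makes the chart strictly less diverse.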


Not necessarily. That might be true if the top X you're tracking is also expanding along with the size of the catalog, but isn't true if X is fixed, like a top 100 music chart.

More common variants of more popular genres could easily crowd out moderately popular genres. The long tail has been a long noted issue at Spotify with several attempts at fixing.


It reminds me of why Apple Pie is America’s favorite pie.

50-100 years ago when parents were deciding what pie to make or buy, parent’s favorite was X, eldest kids favorite was Y, youngest kid’s favorite was Z.

Parents instead bought Apple Pie, because it was in everyone’s top 5 favorite pies.

The Top 50 is an average across all worldwide listeners. No one's favorite songs are in the Top 50, but it is the average top 50.


The articles you're linking have been shown to be misrepresentative.

They, for example, look at a really small subset of music (the "Million Song Dataset") and only analysed basic metrics that could be automatically measured. To be honest, I think the linked Spanish paper and the sensationalist "science proves modern music is bad" coverage should be retracted, since the methodology is flawed.

Don't take my word for it, take Tantacrul's - composer, video creator, Design Lead for MuseScore [0].

It's such a complex topic to study "music" - you have to narrow it down: genre, country of origin, purpose. It's a major simplification to say that "all music is worse now"; there are a lot of types of music that didn't exist in 1961 (synths, electronic, hip-hop) - how can you compare them?

[0]: https://www.youtube.com/watch?v=VfNdps0daF8


>The articles you're linking are proven to be misrepresenting.

Citation needed. The articles are actual proof and measurements, whereas Tantacrul's take is just an opinion.

>They for example look at a really small subset of music "Million song dataset" and only analysed basic metrics that could be automatically measured.

The "small subset of music" is the most popular music of any year. That is what reveals music tastes over time, and this is the kind of music that permeates culture.

As for the "basic metrics", they're not basic as in trite/insignificant, but basic as in fundamental.

It's just that some people want so much to cling to "each period is the same, there are no ups and downs in cultural production", so as not to be seen as backwards oldsters, whereas millennia of history teach that there are absolutely ups and downs in cultural production (periods of stagnation, etc).

>it's a major simplification to say that "all music is worse now"

It's also a strawman. Nobody said that here.


Really love Tantacrul's video on this. I was actually about to look for it to reply to the grandparent post when I saw your post.


>(synths, electronic, hip-hop)

Synths began in the 70s, and Moroder/Jarre are far better than 90% of today's crap.


If it's better than 90% of today's synth music, does that mean that 10% of today's music is better than Moroder/Jarre?

That's a lot!


No, just means it's equal but derivative...


The thing about music is that "more complex chords, more harmonies, more timbral variety, etc..." doesn't say anything about whether or not the music is better. It's a silly pursuit anyway given that music is a subjective experience.


That's neither here, nor there.

This is not about whether or not this or that particular song is better; it's about whether the top (most listened/streamed/talked-about) music in general is losing variety in all these aspects (harmony, melody, timbre, lyrics, genres, dynamics, etc).

That's not subjective, that's objective, and has been measured to get worse.

The "subjective experience" could be good, the same way people can prefer McDonalds over a wholesome meal by the best chef. Doesn't mean its also good by other metrics, and I for one don't believe individual taste is everything. There are people with shit taste, loads of them.


Again, “worse” is subjective.

Even saying someone has “shit taste” is subjective.


In the end, everything is subjective, even morality. There's no objective physical law that says killing is bad. Animals do it all the time and could not care less about it.

But to the degree that we have a culture, we also have a non-100%-exact but nonetheless real hierarchy of artistic works. There might be disagreements, even strong ones, but there's also some general agreement that not everything is a fuzzy blob of equal value left for individual taste to sort out.

Is the idea that the Beatles are better than The Monkees or Oasis, that Aphex Twin is better than Skrillex, that Dua Lipa is better than Justin Bieber, that Michael Jackson is better than Milli Vanilli, in some non-measurable but tangible way, really that difficult?

That this, once a common and well accepted idea (related to the idea of the "canon"), appears like beyond the pale for the 21st century solipsistic individual, where only the subjective taste matters, is not really the fault of the idea itself.


I wonder how deeply tinted your rose-colored glasses need to be to go to these lengths to prove that things were better back in the day.


Or how deep a fear of being called an "oldster" one has to have, to need to go to these other lengths to prove that things are always a fuzzy blob, and that there are no periods (historically even decades or centuries) with worse or better artistic output...


There's a simple metric for the state of modern music - where are the modern equivalents, in musical stature, of David Bowie, Pink Floyd, Roxy Music, Bob Dylan, Elton John, U2, Fleetwood Mac, Steely Dan & The Eagles? Answer - there isn't anything anywhere near the pantheon of talent that graced the late 60s to the mid-80s. It's just a phenomenon of history. It's not relativism due to ageing. It's a fact in the same sense that Shakespeare's work outlived that of his contemporaries.


I guess this depends on 'musically diverse' === 'better' though, which isn't necessarily the case when people think about what makes the top 100 tracks for a year.


The top 100 hasn't been a relevant indicator of modern music since Napster. The top 100 is a record industry owned entity with a tremendous amount of self-interest.

If you can't find more diverse modern music than the top 100 offers today you're not trying that hard.


That's neither here, nor there.

This is not about being able to find this or that niche musician that's great or even greater than any in the past, but about what the masses listen to getting worse.

Music isn't just a solitary experience, but also a part of general culture.

As streamed music is getting cruder over time, the majority of the people are listening to increasingly shitty songs. That's chilling, regardless of whether someone can find 10000s of niche bands to listen themselves.


It's reductive to suggest that the top 100 represents "general culture" (whatever that ambiguous term means). There aren't solely niche musicians outside of the top 100. There are plenty of musicians with millions of listens/views on streaming media who don't enter Billboard lists yet have extremely prodigious careers: complex lyrics, a wide assortment of instruments/equipment, evolving genres, etc.

Your data might be objective, but it's still a narrow slice of a much broader ecosystem.


I have a different opinion. When we moved from the old ways to semantic HTML5, still generated server side but with the layout defined by the finally mature CSS, and with the dynamic parts handled by JavaScript and RPCs, everybody agreed it was better. Such a web was faster, more compatible, simpler to parse, and so forth. I think that if, in the 20 years that followed, frontend development had progressed in that same direction of improvements, everybody would agree again now.

And indeed everybody agrees that JavaScript itself is now better, as are the modern web APIs. If the rest is so controversial, there must be a reason.


I’ve heard and read stories from seasoned engineers, here on HN and elsewhere, that say that FE dev was as prone to tire fires back then as it is now, so I don’t think that point is as incontrovertible as you claim it to be.

Some of those seasoned old-school engineers have, believe it or not, embraced modern FE frameworks and architectures and wouldn’t go back to the old thing if you paid them to do so. I’ve seen testimonials here on HN and surely it must be a factor in the world moving in this direction - it was not the juniors who made these decisions after all.


My point is different: I said that the true evolution, from tables for layout and continuous page refreshes to update state, toward a better development environment, was universally recognized. And if this were again a case of good design, everybody would see it. If many voices are talking about over-engineering now, I bet there is a reason.


We had functional applications for forms and the like, in the 90s, running on machines with 8MB or 16MB of memory. They weren't as pretty, but they had simple development paradigms, VB and Delphi, easy to get started.

HTML has been the wrong place for creating UIs (as opposed to marked-up text) for such a long time. Things are getting better from multiple angles, but it's still very uneven.


It's always easy to solve a problem by taking away a number of its requirements. VB was good at forms but visually unacceptable for branding. It was difficult and unsafe to deliver over the internet. And again, there were a LOT of really bad VB applications that hurt usability and performance.

If all you're doing is building forms for an application then it's acceptable, but that's an easy problem solved with elegant solutions today as well. HTML can immediately deliver the styled application without installation to every device regardless of OS. It can propagate changes without reinstallation. It is accessible for every kind of user. And it does this while looking miles better than any VB application.


>And it does this while looking miles better than any VB application.

Far less usable too.

> It was difficult and unsafe to deliver over the internet

No one cared, we had physical media.

> And again, there were a LOT of really bad VB applications that hurt usability and performance.

Most of them were fast enough.


OTOH, i think you can actually measure the quality and accessibility of tooling by how abundant the long tail of mediocre/low-quality output is.

in music’s case, one of the replies to this comment describes a decline in musical complexity/sophistication, which i’d personally attribute to democratization of the tools (which are also much more powerful, allowing kids w computers to do what took whole teams and studios full of equipment before).

so i think only seeing high quality UIs in the wild is more of a mixed bag than is intuitive to us — a world absent of shitty soundcloud rap is a world with worse music tooling.


I definitely agree with the music survivor bias thing. This is also very noticeable in the “computer generated imagery in movies looks artificial” meme: you just didn’t notice all the extremely effective and convincing CGI.

But I’m curious, what are the examples of great websites “back in the day” that have stood the test of time and would be considered good web development today?


Wikipedia and Zombo.com haven’t changed much.


You're on one right now. Although it'd benefit from some mobile compatibility...


Craigslist


> Everyone says music "used to be better" but that's just survival bias.

Not "just" survival bias.

I think it's quite reasonable to take a position that there was more innovation and creativity in pop music in 1950-2000.

As the genre has matured, popular/commercially successful music has depended more and more on fewer and fewer producers.

Indeed, there are fewer commercially successful artists: the US Billboard Hot 100 Top 20 this week contains 3 tracks by Justin Bieber, 2 by the Weeknd, and 2 by Drake. That would have been unheard of.


> it's quite reasonable to take a position that there was more innovation and creativity in pop music in 1950-2000.

A lot of people said the same in 1990, except their cutoff was 1980. All that synth is just rubbish, man, what about Led Zeppelin, that was new! Except they were just raiding blues and laying some electric guitar on top, of course, so really the cutoff is 1960; but to be honest them bluesmen were just recording stuff that had been sung in the cotton fields forever, so really the cutoff is the beginning of the slave trade; but really those melodies were just brought all the way from Africa, so really... etc etc etc


I think this essentially agrees with my point (that there was more creativity in the broad space of 'pop music' when pop music was a newer phenomenon).

I appreciate the reminder that a lot of pop music has a longer history as it developed from Black music styles. Thanks.


I think you've got that completely backwards. 2000 is about the time that the "long tail" exploded, where Youtube and Spotify made non-top 40 music a lot easier to find. In consequence there is a lot more really good music available today than there was pre-internet and it's a lot easier to find. It's not on the radio, but that's always been the case.


> In consequence there is a lot more really good music available today than there was pre-internet and it's a lot easier to find.

Tangentially, if (like me) you find this "long tail" interesting but don't know where to start, I've enjoyed going through Ted Gioia's "100 Best Albums of [previous year]" [1] each year. It's obviously subjective and subject to whatever biases he has, but the sheer diversity of the list is quite cool. Listening to a few albums per week is a pretty easy way to sample -- most of the albums are on YouTube or Bandcamp for free.

[1] http://tedgioia.com/bestalbumsof2020.html


Exactly. One only has to look at the myriad of tv shows, video games, movies, etc. to see that we have more quality options than ever and are seeing even more niche, but commercially successful content. There’s more static too, but there’s a lot of good out there.


Everybody thinks we are in a golden age of television series, don't they?

I don't claim that these examples have the same history as music.


I don't think so at all - there's a lot of pastiche, some of it loving, but there's only so many ways to do a pop song. It's alright for forms to die. The real "long tail" that the internet introduced is that we now had access to all of the good music we missed before the internet. There's no reason to bitch about modern music anymore because there's no reason to be exposed to it other than aggressive fandoms and marketing.

I honestly don't even understand the impulse to listen to new music. The quality of "newness" is not a desirable one for me in music; I prefer qualities relating to sound. To me it seems like a distorted version of the impulse to be cool in high school.

The real problem is that commercial, recorded music crowds out local, live music. For local music I understand wanting to know what's new - you may miss out because bands aren't live forever.


And before, soulseek.


> Indeed, there are fewer commercially successful artists

Source?

I reckon every decade had a handful of artists dominate the charts, so it wasn’t unheard of at all to see multiple chart toppers by the same artists at any given time. In fact, given the gatekeepers of popular music largely were concentrated around radio DJs and the “industry machine” of record labels, I’d hypothesize there’s a larger on average number of unique artists in the top 100 since the 2000s than before.


>Source?

https://www.billboard.com/charts/hot-100

You can change the week yourself.

If you pick a few random weeks before 2000, you likely won't find this pattern that I identified, with the same artist in the Top 20 multiple times.

If that's not enough evidence then there is a Python web scraper: https://github.com/jocelyn-ong/data-science-projects/blob/ma...


Good, and commercially successful, don't always go together.

Amazon is commercially successful but few would argue their UX is good.


Do we know what the UX goals are for the site or what requirements they had? It's really difficult to gauge these things if we don't even know what they're going for.

Personally, there are some things I have found difficult to do (getting a chat going with a customer service agent) and some things that are very easy (checkout process, making a return). It's no surprise to me that they would make it slightly more difficult to find a live person to chat with since they probably want me to exhaust the other automated methods of solving my problem before doing that. Is that bad UX? I don't think so.


Amazon's user experience is fantastic. Why else do you think they are so commercially successful?


Amazon pages are a cluttered mess, but they are a -familiar- cluttered mess because everyone has used Amazon for so long. Amazon succeeds in spite of its design, because everyone has become familiar with how it works. This approach would not work outside of Amazon, which Nielsen Norman discussed some 15 years ago (https://www.nngroup.com/articles/amazon-no-e-commerce-role-m...)


I wonder how this might read if written now, rather than 16 years ago.

NNG often say to follow conventions and Amazon has been leader for so long now that I can't help but think they have established many conventions now that might have been viewed negatively 16 years ago.


> Amazon pages are a cluttered mess

Ahh, you want "clean" design where maybe there is just one box with beautiful font and not all these options "cluttering" things. You want those options buried in an option tree maybe 8 levels deep.

Well, I'm pretty sick of this minimalist design philosophy that requires 10x more clicks to accomplish anything because a designer decided the original was just too easy to use and wasn't "engaging enough" to generate enough clicks. They have made the web a much worse place for getting stuff done online, and it's no wonder that Amazon, the home of one-click purchasing, has continued optimizing for having the UI get out of your way and accomplish the most with the least effort rather than building attention-seeking UI "experiences" that require lots of interactivity to get simple things done.


The big one that always gets me is sort by price doesn't actually sort by price. Or at least, not the price I care about (which is price for condition new, seller Amazon).


Consumer logistics - which rely on not paying drivers and warehouse people much, and forcing them to work insane hours.

And AWS.

The web store UX has become appallingly bad - unreliable/fake reviews, tens/hundreds of sellers all roboselling the same identical items from China, poor quality items, unreliable/poor product search ("10TB external hard drive" shows... not many 10TB external hard drives). And so on.


If ECMAScript, HTML, and the DOM didn't exist and you were asked to create a specification for applications where the client UI is remote, possibly very resource constrained with a connection to the back end that may be slow and only mostly reliable, what would you invent? Is there a better model already out there that isn't used because Javascript + HTML has sucked all of the oxygen out of the room?


Even Flex and ActionScript were better in many ways than the DOM.

My big problem with web application (in a true application sense - not things that are just hypertext and shouldn't be applications at all!) development is that the DOM+CSS model is not made for rich UI experiences. Basic paradigms from desktop applications like spreadsheet cell selection, draggable cards (think Visio/UML), modals, and MDI / multi-document interfaces are non-standard and brutally challenging to construct in a reasonable way using the DOM.

What I'd invent would pretty much be Silverlight without the Microsoft, honestly - a typed UI framework built on a widget and widget-binding model which would allow a smooth mixture between OS-level widgets (and the accessibility affordances they provide) and hand-controlled rendering/drawing, with a stripped-down runtime enabling resource constrained clients to execute client-side code which would hide / paper over the resource constrained backend connection.

Anyway, I also think this is orthogonal to the argument in this thread, because I think that most of the conversation and the sentiment of the original tweet is to call out applications that SHOULD be hypertext, not applications. For applications that need to be applications, I think things have gotten better, not worse, although they're still pretty bad.


As a language, ActionScript 3 was actually pretty good.


The reason it's so hard to come up with a feasible alternative at this point is that we have done ~15 years of browser wars, with perhaps an average of 3k "core" contributors (browser company employees etc), since "we" moved in this direction... pretty much by chance.

My understanding from working at Opera at the time (2004 and onwards) is that the "senior" (experienced) people were busy implementing stuff in the browser engines and various GUI platforms. We hired very young (often like 17-18) and very smart people who had experience actually writing HTML/CSS/JS to work on developing web standards. They naturally typically had very little commercial software development experience.

After a while it kinda became a competition - which browser company's web standards people would be leading in terms of ideas/innovations. How many web APIs could browser company A do, vs browser company B. That's when the complexity really started accelerating. Then Safari and Chrome happened.

I wish we had spent more time working with these web standards people (we had so much experience building GUIs, for instance). They were really friendly and approachable, but we were all so busy with actually building the browsers, during these browser war times. It feels like a missed opportunity, in retrospect.


Java Web Start was a thing, and had some nice aspects to it. JVM startup time has been a problem, and the UI toolkits maybe had some issues? And of course the early JVMs were notoriously full of security holes, which is something browsers somehow managed to avoid.

But nevertheless I think Java/JAWS got basic stuff still right: run applications directly from network (with auto-updates), have security controlled sandbox, be fully cross-platform and portable, and have useful stuff built-in, and even had the PWA-like thing that you could have desktop shortcuts to JAWS applications.

Regarding startup time, I do wonder what the startup time of your typical Electron app would be on late-90s/early-00s PC hardware. Somehow I'm imagining the JVM might not be that bad in comparison...


As far as I know, security issues were mostly due to security not being too important at the time, but could have been trivially solved. Startup time is similarly a question of what should be optimized for.

I would have liked to see a web built on Java tech.


Depends on use case, but for general purpose apps MS XAML platforms are IMO better. They were designed for rich GUI from the start and it shows.

The main downside, they only work on Windows, MS once had Silverlight for Mac but it wasn't particularly good.


X Windows was an option back in the 90's, and with frameworks like Motif++ it was kind of ok.


Does X scale? Can you have thousands or even millions of simultaneous clients hitting a server?


So, X flips the nouns, and the server is the part where the user sits, and the client is the part where the data processing happens.

It used to be possible (and done) to have a thin client X server that would have the whole desktop running elsewhere. You could run a computer lab with a bunch of thin clients and a handful of desktop servers for lack of a better word. That was probably hundreds, maybe thousands.

Since then, though, X moved a lot more work from the Xserver to the clients; text is almost exclusively rendered on the client and passed as an image to the server to display.

If you did a classical experience, it might work ok to millions, but the user experience would have to be pretty meager. There's also the issue that there's not really a good way to sandbox different X clients; so nobody in their right mind would let a random internet server connect to their Xserver. Connectivity issues abound as well.


> Others say that the web is slow because of ADs

This is pretty much objective truth in my mind. I might agree with people who hate the web if I always had to browse with ads enabled.


My worst experiences in the web come from mobile media (news) sites where I can’t block ads. They are absolute dumpster fires.

Everything else, including the aforementioned sites when ads are blocked, is pretty decent in my experience.


I think this is more about the limitations of our perceptions given we live such short focused lives.


I recently lost my internet provider, so while I wait for a new one I'm tethering from my phone. The Verizon unlimited data plan throttles me to modem speed, about 56k, so it's not actually unlimited. Anyway, it's been interesting to judge various websites by how fast they load at that speed. Hacker News comes right up. Twitter is acceptable. Facebook is terrible and often does not even load at all. The same goes for any site that uses React. I'm not sure why Facebook uses such a bloated system when they are trying to expand their user base into much of the world that does not have high-speed internet.


You can bypass the speed restrictions by decreasing the TTL on your machine by 1 [0], I tested it and it worked perfectly.

[0] https://android.stackexchange.com/questions/226580/how-is-ve...
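If anyone wants to try this on a tethered Linux laptop, the usual form of the trick is a one-line sysctl. Treat it as a sketch: the exact value depends on how the carrier counts hops, and it may violate your plan's terms.

```shell
# Illustrative only: raise the default TTL for outgoing IPv4 packets.
# 65 is a common choice: the phone decrements it by one in transit,
# so tethered packets reach the carrier with TTL 64, the same value
# the phone's own traffic uses, and blend in.
sudo sysctl -w net.ipv4.ip_default_ttl=65
```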


The answer you linked talks about _increasing_ the TTL on your machine.


Hah! This is awesome, I'll give it a try. Thanks.


I travel a lot and have gotten quite used to browsing on airplane WiFi, so a similar low-bandwidth experience (at times).

I'll add a huge culprit to the list: Medium. They have their own "clever" code to progressively load images and I find it absurdly frustrating (because in most scenarios, the images just don't ever load), so I end up with lots of design articles with blurry blobs of color instead of images.

There are so many ways to natively progressively load images that I'm not sure why they've chosen the most user-hostile one. You see blurry blobs of color in no particular order, no indication of which ones are loading, no way to force a particular image, etc. I find myself frustrated often and I end up abandoning most of the stories (or avoiding Medium altogether).


Isn’t progressive loading actually built into the JPEG standard? Like, you get it for free if you encode it for progressive decode. Yet another “let’s use JavaScript” waste of time. Developers gotta develop tho.
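(For reference, it is: a progressive JPEG is encoded as multiple scans, so the browser can paint a coarse full-frame preview almost immediately and refine it as bytes arrive, with no script involved. Any common encoder can produce it; for example with ImageMagick, where the filenames below are placeholders:)

```shell
# Re-encode an existing JPEG as progressive ("plane-interlaced").
magick input.jpg -interlace Plane output.jpg
```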


This came up with my remote team recently.

A coworker and myself had the worst internet speeds in the company, but he recently got FTTH.

I went to replicate a bug, by clicking on a button quickly and excessively, and was able to add 5 duplicate entries into the DB.

The frontend dev could not replicate it until I suggested using the Chrome dev tools to simulate a slower connection.

I have Frontier DSL.


I remember tracking down corrupt entries in our DB. It was mostly one user introducing the inconsistencies. Turns out he would double-click on every button, and the browser would happily send-abort-send two requests every time. Sometimes these would race on the server.

We implemented disable-on-submit after that, and the inconsistencies went away. Other people would click again when the response didn't come fast enough, but that was rare to lead to corruption. Probably when their connection was lagging, they would click multiple times in frustration. But that one guy provoked enough destruction to make us notice and fix it for everybody!
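The client-side half of that fix can be tiny. A minimal sketch (the function name is made up for illustration), with the caveat that the server must still enforce idempotency, since nothing stops a retrying connection or a user with devtools from resending:

```javascript
// Wrap a submit handler so duplicate clicks are ignored while a
// request is in flight; the guard resets once the request settles.
function guardSubmit(handler) {
  let inFlight = false;
  return async function (...args) {
    if (inFlight) return;        // swallow the double-click
    inFlight = true;
    try {
      return await handler.apply(this, args);
    } finally {
      inFlight = false;          // re-enable on success or failure
    }
  };
}
```

In the browser you'd pair this with `button.disabled = true` for visual feedback, but a server-side idempotency key is the only real guarantee against races.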


I believe Twitter uses React? Although it’s possible that Twitter successfully served you Twitter-Lite while FB may have mistakenly sent the full version.


I just checked. You're right. So maybe it isn't React but something else Facebook is doing.


How about editing your above comment that was badmouthing react then?


Because I don't know it's not React. Twitter could be using a stripped down minimal version of React with limited functionality. Standard React could still be a bloated mess.


> Standard React could still be a bloated mess.

React is ~6KB (even less gzipped). React is very performant, and its performance was one of its original claims to fame over previous frontend frameworks like Angular.


React never claimed performance, and the virtual DOM is quite wasteful. Any direct manipulation library will perform better.

Plus, it’s dishonest to look at react core size. react-dom, which is absolutely required, is 40KB minified AND compressed. Plus the usual plugins to handle icons, SVG, animations, and on on. A base React application easily crosses the 200KB gzipped mark.


> React never claimed performance, and the virtual DOM is quite wasteful. Any direct manipulation library will perform better.

React's performance (in large part because of its virtual DOM) over Angular was a common benefit cited in its rise to fame. Indeed no one ever claimed React was somehow faster than the document API (or even jQuery), and that isn't what I'm saying either. Whether or not you think the virtual DOM is wasteful in 2021 is a separate opinion, and even Svelte devs admit the virtual DOM approach is usually fast enough.

> A base React application easily crosses the 200KB gzipped mark.

I didn't bootstrap a CRA app to check its full size, but considering a hero image alone can easily be larger than this, it still stands that React is certainly not "bloated".


The mating call of all react devs - “but what about the images”. Who said a 100KB hero image was a good idea either? It doesn’t excuse dropping a 1MB (when executed) bomb of JavaScript to deliver a blog post and making mobile and low speed users suffer.

It’s also established that byte per byte, JS is way more costly to download, parse and execute than images, the comparison itself is dumb.


Being faster and smaller than Angular isn't much of a feat...


As I said, it's not the size of the React library that's the problem: https://news.ycombinator.com/item?id=26691150


There is no additional overhead in nesting React components vs normal HTML elements. React-rendered and normally rendered HTML are identical.


Dan Abramov has stated that hooks were created to avoid the nesting hell previously used to manage state. I may be misunderstanding him; I am not a React expert.


React hooks are just Javascript functions. The layout of your application is unaffected by the shape of your application state. Instead the shape of your application's state is typically governed by its layout.


I think Twitter does some aggressive PWA-style caching


Funny, I just went through a similar experience and had to tether off a Verizon connection for almost two weeks.

I certainly felt the pain of a slow connection too and felt frustrated at how badly this affected the experience on so many sites.

Here’s an idea: web developers should test their sites on the slowest connection speed still commonly available (ie 3G) and make sure the experience is still acceptable. I know that webpagetest [1] allows you to do this and the results are illuminating.

[1]: https://www.webpagetest.org/


No idea if they're still doing this, but Facebook has done "2G Tuesdays" for just that reason

https://engineering.fb.com/2015/10/27/networking-traffic/bui...


Considering that Facebook manages to use up 800 MB of RAM, and that it takes a good 2-3 seconds minimum to open a chat head, I honestly have a hard time believing they have humans test that shit of a UI in any shape or form, let alone in multiple ways (or that they care about the test results)


I’m living overseas and this happens with my phone from the US. It’s unbelievable how many apps time out. Why an app would implement its own timeout instead of relying on the network stack is beyond me.


React is not inherently slow. The sites you use are slow because they're bad sites, not because they use React.


Easier to start an ISP and give away free internet than to rewrite Facebook.com to be more performant


If not that, maybe try PdaNet. It is $10-15 for a license and it can mask that you're tethering through a tunnel from desktop->phone, so everything gets counted as mobile data instead of hotspot data.


Search for "Facebook Lite", it's their lightweight frontend for lower-bandwidth connections.


It appears to be an app for a phone? I have no throttling problem on the phone in case that wasn't clear, but I don't care to look at the small screen and want to use my desktop.


I think they mean https://m.facebook.com - a lightweight web version.


There's also https://mbasic.facebook.com, which is even more lightweight and still allows you to send and receive messages.


I tried it. It's very ugly and featureless on desktop but it does load fast.


It actually used to have -more- features. I use it on my phone exclusively; once upon a time I could use messenger from within it. They removed that feature and now try to foist the Messenger app on me. So I don't use FB Messenger from my phone.


After minification and compression, React is about 100kb and can be cached for a long time.

With all the images, videos on social media etc. it seems rather small in terms of bandwidth.


I'm not counting images and videos because all sites have that problem, but things like clicking on the notifications icon and then waiting for it too load and it never does so I have to give up and go back to my phone.


I thought it was actually around 30kb, after being minified and gzipped?


It's not the size of the React library that is the problem; it's the nesting of components within components within components. If you've ever viewed source and seen divs 20 to 30 levels deep, you're most likely looking at HTML that React produced.


A React component has no obligation to add a div or any other element to the DOM tree.


Dan Abramov has stated that hooks were created to avoid the nesting hell previously used to manage state. I may be misunderstanding him; I am not a React expert.


I believe "nesting hell" was in reference to obtuse state management architecture rather than DOM structure. It seems like a common misconception based off other comments on this post. Angular inserts a new custom element for every component so it's definitely a problem elsewhere, if not with React.


React needs to borrow the idea of 'zero-cost abstractions' from Rust. Too often I've seen five divs used to add five layers of behavior where one (or zero) divs could have had the same effect.
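For what it's worth, React has had a zero-element answer to the wrapper-div problem since 16.2: fragments. An illustrative sketch (component names are made up):

```jsx
// Before fragments, React's single-root rule pushed people toward wrapper divs:
function ColumnsWithWrapper() {
  return (
    <div>{/* exists only to provide a single root element */}
      <td>Hello</td>
      <td>World</td>
    </div>
  );
}

// A fragment groups children without emitting any DOM node at all:
function ColumnsWithFragment() {
  return (
    <>
      <td>Hello</td>
      <td>World</td>
    </>
  );
}
```

Fragments remove the syntactic excuse for wrapper divs; the extra div-per-layer-of-behavior pattern is a library/author habit rather than a framework requirement.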


With react dom and everything?


This topic seems to come up on Hacker News every few months or so. Not saying the post is necessarily wrong there, but it's certainly something people here love discussing nonetheless.

As for why front end development may be a bit of a mess here? Well, it's really a problem that doesn't have just one cause.

On the one hand, business pressures likely have a huge impact here. Companies love analytics, tracking, ads, etc, all of which contribute greatly to the issue of slow loading, broken sites. If you include everything and the kitchen sink, then things will clash or time out or break.

There's also an issue with businesses focusing on speed and getting work done quickly rather than well. If you're given unrealistic deadlines, and the feature set is changed midway through development because some manager/sales team thought of a shiny new idea that needs implementing... well, it seems like optimisation, usability, etc. often end up on the cutting room floor.

And yes there's a bit of envy from developers towards Facebook/Google/Apple/whatever there too, and a need to 'test out the shiny new toys' so they can potentially get hired there in future. I suspect CV padding is definitely why SPAs are way more common than they used to be, and why simple static or server rendered sites seem nearly nonexistent now.


I am not an expert chef. I can cook pretty well, and my guests enjoy my food. But it doesn’t compare to what a professional chef can do.

I am not an expert abdominal surgeon. If we are in Antarctica and your appendix becomes inflamed, I will try to save you by cutting it out. But you will probably die.

I am not an expert front-end developer. I make sites for myself, for others, and have even been paid for it a few times. But I know I am basically an amateur. However, my sites are far, far better than almost all sites in the wild, created by teams of full-time, professional front-end developers. They work identically in all browsers, are fast, easy to navigate, and people say they look good. They validate, too, except for a few nonstandard htmx attributes.

There is something very strange in the WWW, when a self-taught amateur like me can make a site that is better than the New York Times’.


Your sites are better at being tools ordinary human beings can use to their own benefit. They are (probably) terrible at generating money via ad revenue and tricking people into buying subscriptions that are then nearly impossible to cancel.

Making a site to share info and have fun is easy. Nearly anyone can do it. Making a site to actively exploit people in the most intense way possible while still being legal(ish) takes highly-trained experts.


Exactly. But I am using the measure of quality that I think is relevant. We call a chef good when we like the food, not when he finds a way to enrich a fast food chain.


Do your sites include interactive graphics and storytelling like the New York Times does? By what measure of "quality" are you setting yourself as "better" than the New York Times team?


I was circumspect in what I said and did not say. Rich Harris, the creator of Svelte, works for the Times. I use Svelte frequently, but would not have been able to create it. I don’t have his skills. And the interactive graphics that you are talking about are awesome. I don’t know how to do that, nor do I even understand how they are done some of the time. I was talking about the experience of navigating the front page and reading the articles, which is pretty bad. I am a better designer than whoever designs these pages. But I am not more skilled.


There is something very apt in your first line: A lot of sites suffer from too many cooks in the kitchen.

If every department proves its worth by claiming space on the front page, customer experience will be the last concern. That's before someone notices that giving content away is not a monetization strategy. Then add the siren song of 'only one more script' for marketing data, even if nobody has a clue what to do with that data.


It's like the Million Dollar Homepage in a sense. Every stakeholder in the business wants their stuff in there somehow. Except we're gravitating toward the Million Kilobyte Homepage.


not saying that you’re wrong, but do you have the same constraints as the “crappy” websites?

there is definitely an explosion of tooling, frameworks, etc, but past some obvious things, IMHO most websites are crippled by business people (that will likely not be using the website) make all sorts of technical decisions what must go in and how it’s supposed to work (ads and tracking crap is one of the things that jumps up).


No, I don’t have all the same constraints. For example, I would walk away if any of the organizations that I work with insisted on tracking visitors. I stopped taking paying customers years ago for this and similar reasons. But if a site is bad for the reader, it’s bad. The developer has failed, even if the customer got what it wanted.


i get it and i am in the same boat as you (ie make it awesome for the reader), but unfortunately this is not what happens in the real world :(


The measure of what is better in all these companies is determined internally and generally by the business. So it ultimately serves the business and only serves the user indirectly, if at all. Obviously you optimize for different things: performance, reliability, simplicity perhaps. And the business has more stakeholders who want different things, more metrics, more integrations with their third-party tools, etc. That’s not to say your measure of better is wrong. It’s probably not and I probably agree with you! But it’s coming from a different place I think.


I appreciate your comment. But isn’t my conception of quality the only one that matters? If a doctor saves money for her employer by skipping some expensive test, and my health outcome is worse, her employer may be happy, but we don’t say that she is a good doctor.


here is a proper doctor comparison: imagine you are a plastic surgeon and a patient comes in with a stack of cash and demands a surgery that is not medically necessary and is high risk for him - and you do it anyway, and the patient leaves happy. if you refuse, the patient would just go to the doctor next door and pay him.


This is clearly a terrible doctor. Am I wrong? Isn’t this doctor violating the Hippocratic oath?


plastic surgeons usually perform procedures that are not medically necessary, but only improve appearance to boost patient's self-esteem (boob job, facelift, butt job, etc)


You moved the goalpost. You stipulated high risk in the comment I was replying to. And who cares what plastic surgeons “usually” do? A doctor who gives boob jobs to women who are already “normal” (not to reconstruct after a mastectomy, for example) is a scourge who is exploiting society’s bad attitudes and exploiting his patients. Not a good doctor. Or are you suggesting that if a lot of people do it, that makes it good?


> I make sites for myself, for others, and have even been paid for it a few times.

Here you go. You make sites for yourself. My own stuff is fucking fast as well.

It gets slow when you have to ward off a thousand idiotic requests and implement maybe 10 of them to shut people up.

While in my own world I’d leave it as it is.

Shitty product is primarily the result of shitty culture. It’s just that FE is visible and atrocious DB calls are not.


It's kind of like saying that you as an amateur chef can out-cook the line cook at a cafeteria who prepares a thousand meals because your best-prepared meal is better than their offering. That actually may be true for that specific case. And sure, your site works great when a couple of people look at it. But what happens when you direct the entirety of the NYT's traffic to it to see how it does?


No, not at all. It is more like saying that I make better food than almost all restaurants in town. Which of course is not true, and that’s the point. Because I do make better websites than almost any I come across in the wild.

Your question about handling traffic is orthogonal to the topic of design. The answer is that any of my sites would do better, given the same server architecture, because I deliberately limit the amount that needs to be transferred for any particular page.


I think their analogy may still hold.

My mom claims her cooking is better than the restaurants' (arguable), but she's not operating under the constraints of cooking at the variety, consistency, and speed that my local diner does. The diner needs to be able to provide hundreds of dishes on short order, including on days when the main chef is out. So maybe my mom's once-in-a-while pot of chili is great, but she couldn't scale to run a profitable diner.

Similarly, sounds like you are hand crafting awesome websites on your own whim and schedule. That's awesome - me too - and I've done amazing handcrafted HTML/js/css for fun.

And then I go to work and manage an organization whose front-end experiences are decidedly not hand crafted, and I feel great about that. Our product helps people navigate one of the most important decisions of their life, and sacrificing our ability to iterate quickly for the sake of hand-crafted front-end code would be the wrong call. Like my mom's chili, it wouldn't scale.

You'd look at my work product and say "ugh what a badly crafted website" and I'll take the criticism but I wouldn't change it.


I’m not talking about sites that aren’t beautiful. I’m talking about sites that are user-hostile, annoying, and a pain to navigate. And that seems to have become most of them.


Thanks, this was exactly the thrust of my analogy. I appreciate it when it lands for at least one person (though perhaps I could be a bit more precise in my writing)


You're welcome! I rely on analogies a lot and I find them helpful. Turns out many others think differently and don't find analogies an inherently helpful way to think. They get confused and get caught in details ("but chili is spicy and that's good, but you don't want spicy code so... what?") It's just cool to see how many thinking models there are out there.


Why do you think your mom's chili wouldn't scale? Do you have any material reason to think that?


I think non scalability is the default assumption. Why do you think it WOULD?

If I stumble into the diner at 3am drunk and want a bowl of chili, guaranteed positive outcome. At my mom's, not so.


I’m an amateur builder but the work I’ve done on my house is better than most of the work I’ve seen done by various trades on my friend’s/family’s places.

Quality you get from strangers correlates with how easily the average customer can judge the work (food is easy to judge). Sometimes things are important enough to be regulated (healthcare) otherwise most markets are for lemons.


Interesting point. But as far as websites go, we all suffer from their horribleness, yet they keep getting worse.


> There is something very strange in the WWW, when a self-taught amateur like me can make a site that is better than the New York Times

Would love to see something that does everything the NYTimes does that is clearly and obviously "better."


Easy: take the front page and remove every headline and introductory paragraph that, for some reason that boggles my mind, is repeated, sometimes more than once, sometimes more than twice, on various areas of the enormous page. Now you have a page with the same information that is lighter and easier to find things on. And less stupid.

Another example: sometimes there is a stock ticker near the top of the front page, and the number of digits it displays changes as the ticks go by. But they did it wrong, and when this happens the entire layout jumps. I learned how not to make this kind of mistake near the beginning of my self-education in amateur web design.
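The standard fix for that particular jump is cheap, which makes it more galling. Two CSS sketches (the selector is made up for illustration):

```css
/* Digits in most proportional fonts vary in width; tabular figures
   give every digit the same advance width, so updating the number
   doesn't change the element's width. */
.ticker { font-variant-numeric: tabular-nums; }

/* And/or reserve enough horizontal space up front so a change in
   digit count can't shift the surrounding layout. */
.ticker { display: inline-block; min-width: 9ch; }
```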


I think the core mismatch here is that you are optimizing for experience and NYT, as a business, is optimizing for revenue. Those two things often do not overlap.

I can virtually guarantee you that NYT has spent thousands upon thousands of hours optimizing and multi-variant testing their headline display to maximize conversion. In other words, if they followed your advice, they would be leaving money on the table.

As for your stock ticker display bug, keep in mind that NYT likely employs hundreds of developers and dozens of teams each owning various pieces of the website. Bugs will make it to production, so it's a matter of prioritization. I'd be shocked if they weren't already aware of that bug. It's probably lower on some specific team's backlog than a bunch of stuff that will be more valuable for the company in terms of revenue.


You are making my point for me: “you are optimizing for experience and NYT, as a business, is optimizing for revenue.” That is simply an attempt to explain away the bad experience of reading the Times. The explanation is probably correct. But by offering explanations for why the site is bad, you are implicitly agreeing that it is bad.


I wouldn't call it "bad". I'd say it's not perfect, and rightly so, because perfection would be lower ROI than some other things their developers could be doing.


Neither your cooking nor your websites face pressure from the business side, right?


Yeah it's strange. For example, Facebook has billions of dollars, thousands of developers, and they themselves created the front end framework that they are using - yet Facebook's front end is slow and glitchy.


People trot this one out every now and again and what they mean is “front end used to be the easy part of the stack”. If you lived in the bad old days of jquery spaghetti and in-line php templates you’d realize what a bad take this is. Yes it used to be that anyone could jump into the front end. And the front end _sucked_ because _anyone could jump into the front end_ and so they did. There was no organization. No architecture. Nothing. Just a bunch of files that got harder and harder to maintain as the app increased in functionality and scope.

Modern FE is a discipline every bit as complex as anything that we face in the backend. It requires that we actually apply an architecture. Design our code so that we can respond to changes in our business requirements etc. Serious engineering rigor in other words. Hell I could make the argument that in some ways backend dev is much more straightforward.

I think it’s fair to say that we often put too much business logic into the front end. Fat clients are not something I agree with. But to say things used to be better is just flat out incorrect.


> I think it’s fair to say that we often put too much business logic into the front end.

As a back-end dev, this reminds me of the time I used our own product and saw a read-only field on the UI. It was some interesting bit of data that only existed in the database (we didn’t expose it in the API), and I brought up how cool it was that we were doing that now. The front end dev said, “oh, it’s not from the database, we make several API calls to get the bits we need and then calculate it the same way we do on the backend”

I facepalmed. Like, just ask to expose that data, it’s a single line of code! Instead we made 7 API calls... :sigh:


Man I’d love it if our backend teams were that willing to make changes for us. So often I stumble across bizarre, complicated business logic in our various client code bases and ask the devs “why is this here? surely this is better put on the backend/already exists on the backend” and am greeted with “we agree, but when we asked the backend teams to make a change on their end they told us it would be done in the next year or two”.


That’s aggravating. I do full stack dev and work at startups so that particular scenario hasn’t come up but I could see how that disconnect could create some tech debt. That said, I think it’s a planning process problem. You really should have someone on your project capable of making those changes. People really are the hardest problem.


Can you not propose a PR to the backend code?


> I facepalmed. Like, just ask to expose that data, it’s a single line of code! Instead we made 7 API calls... :sigh:

The reality here is that this person who worked at your company, along with everyone else who saw that code and deployed it, all thought it was easier to do what they did than to ask you to expose that data with one line of code.


Yeah I generally think the UI should only manage stuff like that cosmetically. Stuff like form validation etc. The real work should happen on the backend and not care about UI stuff. So it just throws if the user tries to get cute. IMO that is a nice separation of concerns and keeps the client thin and presentational in nature.


HN seems to be pretty split on this issue. I concur with the linked tweet. Using the web beyond very simple pages like HN feels like wading through garbage.

Everything is slow, laggy, loads ridiculous amounts of stuff (I live in a country where traffic is very expensive at the moment, so this becomes more noticeable). Things are also flaky and need reloading whenever they get into broken states. The only way I use the web now is with uMatrix blocking everything, and then whitelisting stuff that pages want to run piecemeal. It's still terrible.

The larger a company is, the less usable its web stuff tends to be. Google Chat, for example, is unusable in very active rooms.

I hope that frameworks like Hotwire make server-side rendering "cool" again and that we can get out of this tar pit.


The reason I’m interested in Hotwire is that the JavaScript frameworks are confusing. Every new JavaScript framework I’ve looked into has been impossible to figure out. There are simply too many moving parts for me to be able to figure out where to start.

It may be a bad example, but it’s the latest I’ve attempted to use. It took me maybe an hour to get an Angular project started. It somehow managed to install 1000+ dependencies (a few of which seem deprecated) and I have no idea what I’m supposed to do next.

You get the feeling that frontend development is a stack of tools three levels deep and you’re not expected to understand how or why. It feels unstable. At this point I just avoid anything that requires npm.

That being said, I do see very nice project built using these tools.


It’s happening for a very particular reason imho. Frameworks like React create infinite ways one can structure and compose components. What once used to be a <Select> can now become

<DropDown>

And or

<EnhancedDropdown disableEnhance={isEnhanced}>

And or

<MultiSelectWithAutocompleteAndSearchBar onlySingleSelectionAllowed={true}>.

Then you can compose all of those together into:

<ThisComponentMakesSenseToOnlyMeDropDown show={isDropdown && !Carousel && showCarousel} totallyDifferentData={totallyDifferentData} enhanceWith={<EnhancedDropdown />} replaceWith={<CarouselWithNoDropDown />}>

^ That’s where all the complexity is coming from. I will not even attempt to demonstrate my point by adding context and global stores into this.

Small bit of bitterness:

Then you make a nice little storybook component, and a jest snapshot to show that this is a nice ‘testable’ component, you know, like really dot your I’s and cross your T’s.

Back to my point:

This is powerful in the purest sense, as it’s super flexible, but also powerful in a way that can create insane amounts of complexity with different mindsets contributing. Not everyone sees a regular <DropDown>, some see all kinds of things.

I think a lot of the fragility (perhaps the better word is instability) comes from this power (chaotic, out of control power). I hope web components at the very least creates a standard list of UI components (the browser doesn’t even have a default modal yet, still rolling with alert(), and if you leave it to the wider community to make it, we will end up with <AbstractModal>).


My honest take is that the frontend development stack focuses almost entirely on the developer experience (mostly oriented towards shiny things), and the user experience is only a secondary effect of that.

That the developer experience also doesn't work is just an effect of the real-world, where people will end up using stuff outside of the small designated bucket of things that somebody attempts to keep compatible with each other.


I built a side project recently using Django/Hotwire. There's some JS, sure, but it's used where appropriate (media API stuff basically). Lighthouse gives 100% accessibility and best practices scores and performance comes in at over 90% on a good day, with performance issues mostly fixed with some database indexing and caching here and there (it runs on a single shoestring Digital Ocean droplet, so it's never going to be super fast or scalable without a bigger budget, but for the small traffic it gets it's fine). I feel I can reason about how it all works in my head, and fix bugs and add features quite easily. It was fun to build, and I was able to focus on interesting problems.

At the back of my mind is the feeling that somehow I'm doing it all wrong, and it should use a proper JS frontend framework like React or Vue that communicates with the backend with a proper REST API or better yet, GraphQL. I realize it's probably not the kind of project I should use to show off on my resume and that many will just consider it old school. At the same time though it does feel that maybe the industry took a wrong turn when it went all-in on SPAs.


I was thinking about starting Django + Hotwire project myself but I'd like to see some examples first. Is your side project open sourced?



And a cookiecutter for Django + Turbo, very cool. Thanks!


To be honest, I think it's browser-specific. I use Brave, and everything is snappy. Occasionally, I use someone else's computer w/ stock Chrome or stock Safari, and it's a total shitshow. Decent ad / tracker blocking makes a massive difference. I don't think it's frameworks as much as it's all of the other analytics and bloat.


Nope, that's not it - I block all of that.


Why would you want server side rendering? As a user it's painful if you have to wait after each click


Your comment made me legit chuckle. Rather than "waiting" after each click, we get an instant 'page load', half of which can't be interacted with (looking at you, Amazon.com), and then a sea of loading spinners -- all under the guise of 'not waiting'. I'm not sure which of those I prefer, tbh.

I'm partial to good ol' fashioned SSR sites these days, as I do most of my casual couch browsing on an old chromebook running Ubuntu. The number of heavy-weight SPAs that I just can't run on the hardware anymore seemingly climbs by the day. :(


There is - by definition - more overhead in client-side rendering. Think of it this way: The server needs to first serialise data, the client needs to deserialise it, then it needs to construct the DOM to update.

The server can just do the first step and things like Hotwire can add the dynamic bits you need - all overhead and extra processing is now gone.

I can list a lot of websites that don't render client-side that are fast, but very few that do which are.


Is it painful for you to click on HN? I find this website blazing fast. Also, because you don't have long-running JS processes, RAM consumption is low and everything feels very quick, unlike most SPA garbage.


Yeah, I kinda wish clicks would respond instantly


Not so fast. Front end development is in fantastic shape for an industry that’s 25 years old.

In a quarter century, we’ve participated in reshaping how billions of people spend their free time and get their work done, seen the rise of social giants, endured the creation of millions of mediocre small business sites, learned to build apps, and taught millions to code in the process.

Now, some of our current development paradigms are either underdeveloped or outright broken. News websites are a cultural disaster. Web app development changes weekly.

This is the natural result of many people trying many things. But the web mostly works most of the time. Eventually we will have fully baked standards with supervisory committees and enforcement and professional certifications and insurance and all the other trappings of a fully grown industry. But right now we get to enjoy the growth of one of the most exciting industries in history, despite its problems.

Front end development isn’t broken, it’s teething.


It's an industry with established UX guidelines defined 20 years ago which are mostly ignored today. Slow, inconsistent and confusing interfaces seem to be the norm, not the exception.

There were some revolutions, like responsive design, but all the rest feels very shallow compared to the bad UX experienced in so many applications and websites.


I don't find the argument convincing. I've used the web all day for decades, it's fine. Some sites are slow, some sites are buggy, some sites don't meet contemporary design standards... this is nothing new, it's the same as it ever was.

The one quality that truly differentiates the new web from the old web is the Cambrian explosion of ad tech, but this isn't a "front-end" problem, it's a cultural problem rooted in the shifting expectation that everything on the web should be monetized. There isn't really a solution to this, ads are the opposite side of the coin for people producing goods and services that are ostensibly providing value to the economy in the abstract sense... this is the web we all wanted.


There are good examples of ad integration and bad examples. People spend a whole lot more time talking about app architecture versus how to elegantly load, render/position and refresh ads.


Agreed, but the people complaining don't really register the "good" examples, understandably.


Hacker News and old Reddit are the pinnacle of user experience for sites of their "genre". The new Reddit perfectly exemplifies what @antirez is saying. It's amazing to me that the hundreds (thousands?) of people who work at Reddit thought that that user experience was an "improvement" over the old one.


It really amazes me how bad new Reddit is. I have a Windows computer from about 5 years ago that I sometimes use. It takes about 10 seconds after clicking on a story for the first comments to load, and there is about a 5-second delay between pressing a key and the character showing up in any of the text input forms.

Even on my new M1 MacBook Air it takes 2-3 seconds for every page to load.


If it forces us to download a native app because the website is unusable, that's a "win" in their eyes.


And yet, “performance improvements” are often the excuse for making every single site and web app look the exact same. Dull, flat, ugly, homogeneous design has sucked all the life out of the web. Remember flash? Remember when websites were cool? Where it was fun and exciting to go to certain sites simply because of how they were designed? Remember the early days of CSS 3 experimentation?

But but but! Skeuomorphism is slow! We need fast sites. We need accessibility! We need to improve our performance!

Now look. The web is both ugly AND slow.

The big question is why? To me, it’s all about commoditization and making money.

Why spend money on expensive artists, designers and engineers to make sites look good If we can band together (big tech) and make flat design the norm? Everyone will forget that the web didn’t used to suck as bad. Instead, we’ll absolutely decimate performance with loads and loads of ads and nonsense JavaScript. We’ll hire product managers to track every single interaction imaginable so user’s experience sucks and we can invade their privacy!

Basically big tech ruined the web. They ruined design and everyone pretends like this never happened.


I concur with most of the sentiment here, and am happy to push much of the blame to big Google, pointing out that if you are not in Google you basically don't exist on the web (and for many on the planet, if you are not on Facebook then you are not on the internet). When Google says its Web Vitals, speed and all that rank you at the top - or you are not at the top and you become invisible. It's a thing.

However I think the global move away from desktop computing to phones, ipads and cheap chromebooks are the real fun killer for great art and design.

I used to love making sites pixel perfect at 1024 - and today if we spend a lot of time making fancy designs, most people will never see them, since they are checking it out via phone surfing.

This is what I point to for people inquiring about design today - basically it's up to the client to provide the design - what pictures and text do you want to put on the small screen? that's your design.

Even if you put a note on the small screen telling people to visit our Hd/Desktop version for more cool effects they would enjoy - many (most?) people don't even have a way to access a large screen for computing.

Maybe there will be a shift with people casting to TVs and 'smart TVs' browsing better, and the tide will turn - then we'll need some kind of 'report back' framework to tell us people can see and navigate via the larger view, and we could try to remake Flash coolness via HTML video for the big screen. But then we wouldn't know how close they are to the TV, so how large would nav buttons and pop-up text need to be? Meh - back to basic plain stuff.

Personally I think we need better zoom-in controls for browsers that people can easily use and understand - this could give us more flexibility as designers, like the hamburger menu - if you can expect people to know how to use it - not holding my breath though.


Plenty of discussion has been had around incentives and I agree with all that, but there's another angle here: aside from dark patterns, I think terrible front-end experiences are most prevalent in the middle of the bell curve - products developed with middling skill levels and budgets. "Middle" is usually a pejorative, but I mean it literally: not great, not terrible, a wide range of people more or less competent at their jobs. This is where I notice the worst effects of the median trying to adopt the technologies and practices that highly skilled developers at good companies with huge budgets are able to utilize effectively, or outright build themselves - especially when there is so much churn that staying "up to date" with technology is a pipe dream for most people. These median teams inherit all the complexity but, due to budget or time pressures, aren't able to overcome it and produce a product with high quality, and users feel the sluggish experience and bloat that comes as a result. This pertains to more than UI, of course, but it eventually manifests to users that way.

Something feels wrong. The median should be able to inherit the benefits of technology produced at the high end without inheriting such massive costs.


The implication is rough, but it is part of the truth (not all of the truth). However, I will add that the anointed ‘high-end’ - your lifelong backend devs, devops, data engineers, etc. - are adding to the problem. Backend people taking up the frontend hat under the fullstack guise create tons of awful end results, and this is even more common because they already exist at the company and are given a blanket ‘competent’ rating (in other words, they get a chance to mess with the curve before anyone else).

All of this is part of the pervasive problem that your post is slightly tainted by (my attitudes are tainted by it as well), which is a general disrespect for frontend. Regardless, the truth of these attitudes will show up in all of our products. It’s always obvious to me which teams take frontend seriously and which don’t (the quality is always literally visible).


I do have a bias against front end development being from my perspective quite a mess, in the same way that physical scientists often look at the social sciences, though the social sciences are still quite important. But for what it's worth, I consider myself somewhere in that median band, and my perspective is motivated by and applies to front end as much as other parts of software dev/delivery.

I agree you can tell the difference, though I would place blame for those situations more on an organization than the developers who get shifted toward that work out of their specialization. It could be some dubious values, but it also could be project budgets.


I miss the passive nature of the web. A page got loaded and then it was idle. Now I can stare at a page of text with my cpu spinning at 100% in the background, it adds nothing extra to my experience except for a quiet hum from the CPU fans


Not to mention animations. Adding a seconds hand to https://mro.name/o/testbild.svg causes 25% CPU load on modern hardware. Ridiculous. But even more ridiculous is that it's done anyway, often enough.


It makes me really angry when a simple informative site is a bloated SPA app for no good reason.


It's not just simple content sites.

The other day, I placed an online order for curbside pickup (with a national brick-and-mortar chain). When the time came, I drove up to their curb. But I couldn't access their "I'm at the curb, bring out my stuff" page. It just kept loading and loading. I had to go into the store to get my order.

When I got back to my PC, I discovered a massive webpack bundle on that page, along with an absurd level of tracking software. The page had transferred 50MB before I could even use it.

They knew their curbside page would be used exclusively by people on their phones, in their cars. They knew some of those people would pull up to stores with poor cell reception. That page should've been as light as possible. The page was ultimately just a simple form, so it should've been possible.


In many of the cases I've seen the company would've been better off with server rendered markup and perhaps some light client side validation (vanilla JS or jQuery). I'm at a loss as to how they ended up with this instead.


Where I work, it's because we have all of these frontend developers and they need work to do. We keep sticking with ReactJS as the default because we have these folks, and there's a snowball's chance in hell that they'd go, from their perspective, backwards in terms of the technology they use to do their jobs.


Me too. Recently I was visiting what should have been a "simple" site, but for some reason maxing out my cpu because of some refresh loop. Maybe a bug, but the whole site could have been static html.


The tweet is too vague so I don’t really know what they are referring to.

Reddit is slow now because they made what should just be static HTML that is updated occasionally into a web app for whatever reason.

React is much better than any alternative. I tried using native js but you really need all of the things react brings. Now you could go back to server-side, but it isn’t suitable for web applications. There are just too many interactive elements that don’t work with a HTML push to client and reload on every change approach. Server side plus client side is a lot more work and hard to maintain.

Still, I really hate front end development.


I started out working primarily on the front end when React was still in its v0 days, and it still amazes me how many options there are to build a simple form, how often it’s done wrong, and how often what could be a 2 hour MVP turns into sprints worth of bike-shedding on the tools to use.

I now work pretty much exclusively on backend systems and I’ve never felt better.


That is why I am so happy to have mostly focused on Java and .NET and their SSR stacks.


It's hard to argue with that short textual message after it took me 3 attempts to finally load it. The quote for those who still struggle to access it:

> I look at the web today. Not as a programmer, but as a user of broken sites that are unable to obey the most basic rules of navigation and usability, terribly slow despite the hardware progresses. And I can only think that modern frontend development has failed.

Though it's probably not exactly a failure if usability, accessibility, and/or speed are not primary objectives for most, and discussing just that once again probably won't be very useful for fixing it.


I only have experience with backend non web development, so my experience may not be worth much, but...

Setting up a frontend web app with React (or Angular) is insanely hard to self-teach, and I would consider myself very good at self-learning. It's hard to work out how to integrate React with Bootstrap, and you often start to get problems you are not even sure how to fix. When you look at tutorials online, they all have slightly different tech stacks, which complicates this further. On top of that, working out how and where you get your graphics from is impossible for me.

Frontend blows my mind! (And this comes from a guy who has done 1000+ leetcode problems for fun.)

Also, in this thread people say this or that is bad without specifying examples of websites and a problem which could be fixed. Is anyone able to name 2 to 3 concrete websites and examples from which others can look at and learn what is wrong?


I can sympathise with this. I have no professional experience with web technologies. Whatever little I have done has been a hobby. And I can never learn anything "generally", the way I know how to make a C program.

Every react/angular tutorial throws a bunch of high concepts at me, and then a list of commands that will do some magic for me. I am unable to use this info to start my own web project and translate whatever is in my mind to the screen. Everything I have been able to do, has been by modifying existing examples/projects.

Like you, I consider myself to be a good self-learner. I was able to go through a couple of rust tutorials, and with generous consulting of the manual, I am able to write simple rust programs, and figure out how to do what I have in mind.


Others have mentioned the trend of making everything a SPA, which I agree is a huge part of the problem -- it is a cultural issue, not a technological one. Most apps only need to be HTML with minimal client-side scripting. People who teach front-end need to start out by teaching people how to use static site generators, rather than immediately telling everyone they need to start learning React or Vue.

But I'll also say that the web as an app platform was fundamentally broken from the very start, because it was built on a fundamentally bad programming model. *Why is it* that everyone feels the need to use virtual-DOM frameworks and CSS preprocessors for all the apps they design? Why is it that whenever I build an app for the web, NPM needs to download and compile around 10,000 modules (no exaggeration) in order to build the app?

JavaScript was originally going to be a Lisp-like language, but OOP was in fashion at the time and so it was made to be more like Java. Had it been a Lisp from the start, at least CSS preprocessors would never have become a thing because with Lisp you have macros. I am not saying the web needs to be based on Lisp, although it would definitely help if things started moving toward web assembly so people can start using better programming languages and frameworks than just the ones that compile to JavaScript to program front-end apps.


You try writing a UI that:

1. works visually at all screen widths and resolutions

2. returned from a server with zero knowledge of said screen widths and resolutions

3. without shipping redundant assets

4. responds immediately to user input

5. but all visual updates go through this slow and crappy DOM API

6. oh, and it's all single-threaded

7. and there's no standard library for stuff like virtualized lists, modals, etc. so have fun picking your own and shipping it down

8. and your options for navigation are either losing all local state at each navigation, or writing a SPA that some neckbeards on HN hate for esoteric purity reasons


I really dislike these posts. Not because they're wrong - they often have some truth in them, albeit an uncharitable, selective truth. I dislike how these posts just keep complaining about the same tired points. I dislike how they don't offer solutions. I dislike how lazy they are. If you're going to offer criticism of front end, then please include some strategy to fix these problems. It doesn't do much to repeat the same "muh SPAs are too slow".

And yeah, I understand that proposing solutions won't fix your state's DMV site from sucking, but it's at least constructive. It at least makes it feel like you're trying to help instead of keyboard criticizing. Even better, build products with these solutions! Plenty of users would love fast, efficient sites. I know I love it when Hacker News opens instantaneously.


In my opinion, it's a question of what you want to optimize for, speed of execution/accessibility/responsiveness or speed of software development.

The modern web is a stack of abstractions that separates the code in what the site is written in from the underlying machine that renders it. This stack of abstractions makes it many orders of magnitude slower than it theoretically could be to render, but makes it many orders of magnitude easier to program. In other words, this seems like the age old debate of "interpreted vs. compiled" [0].

While the front end ecosystem is sprawling and modern web pages are slow and clunky, the barrier to entry, for most people, is at an all-time low. It's easier now to create a web page that works consistently for the vast majority of web browsers with minimum effort. The render speed and accessibility might suffer, but this is always the price of abstraction.

Jonathan Blow has a talk specifically lamenting this point [1].

[0] http://blog.gmarceau.qc.ca/2009/05/speed-size-and-dependabil...

[1] https://www.youtube.com/watch?v=oJ4GcZs7y6g https://www.youtube.com/watch?v=FvBySbGueww https://www.youtube.com/watch?v=URSOH4F3vbU https://www.youtube.com/watch?v=MlIZ2Q3Rvc8 https://www.youtube.com/watch?v=jpk9Q5gCyIY https://www.youtube.com/watch?v=xK6YZ3NFjuE


This isn't a development failure. Content businesses have been struggling to turn a profit online for years and we're at an inflection point now. Some sites are opting for squeezing revenue from every spare pixel and bombarding social media with clickbait. The rest are going to start pushing subscriptions really hard. And there's going to be a widening gulf between the two.


I'm designing and developing web sites and apps since 2005.

First with PHP (Wordpress, Laravel, Yii), then with Ruby (Rails, Sinatra), now with Javascript (Gulp, Gatsby, Next, React, Lodash, Immer, Typescript). I also do / did static sites with Jekyll, or my own framework.

Among all of them, JavaScript is by far the worst experience, in all areas: the language, the types, the state, the hooks, the blogging engines, the toolbelt, hosting, packaging, publishing, documenting, the community.

The situation is so bad that I quit. It eats my nerves. Instead of creating I'm patching and fixing bugs all day.

That's why the web is what is it today. It's Javascript driven, and Javascript is totally flawed.


It's just so easy to write horribly structured code in JS. It's where PHP was years ago, where everything was just slapped together, and you just pray for no errors.


This sentiment of "modern web has failed" is echoed often but it is always a generalization. Do I want to stab my eyes out every time I go to a news site? Probably. But there are good websites out there, many of which are "modern." It's not about web as a technology or frameworks or even "webdev culture." Some companies just decide having a good website isnt worth it.

Sidenote: I also think there might be too many websites in general. I always kind of think between Wikipedia and archive.org and other information Freedom projects that's all we need. Maybe a little IRC as a treat :)


I'm bothered by a lot of contemporary web development. Many things are implemented using far too much heavy JS code, and user experience suffers. I also think a lot of people drastically underestimate how much we get out of modern tools.

In a followup tweet[1] and in this thread, antirez says that we haven't improved interaction in 20 years.

Let's consider Gmail, which is close enough to 20 years old. At the time, it was an example of a very rare and complex app built on top of Ajax. I don't personally quite remember this, but it apparently didn't have rich text editing for a year after it was released.

Ask yourself: how long would a competent front-end developer need to make a decent 2004 gmail clone using modern front-end tools?

The ideal would be to build more lightweight websites, but exploit modern tools where they pay for themselves. While we're at it, we can write fast software without premature optimization.

[1] (https://twitter.com/antirez/status/1378274076859502593)

[2] https://en.wikipedia.org/wiki/Gmail_interface#cite_ref-RichT...


From defining the entire approach when it launched, Gmail has somehow turned into the single best argument /against/ SPAs. I dread opening it, because I know it's going to be slow to load and slow to use.

I don't know how it went so wrong. Original Gmail was a revelation. Today it's sludge.


... I just loaded Gmail on an empty cache and watched the Firefox Network pane: it made 304 requests and loaded 29.65MB of stuff (9.8MB compressed) - taking 8s to get to "loaded" (on a very fast connection) and over two minutes before it stopped loading extra pieces.


Try to use the basic html version. It is a masterpiece of UI design and is blazing fast. https://mail.google.com/mail/?ui=html&zy=h


I would go a step further and say this often also applies to GUIs in general and not just websites. Because too often the problem is not the framework, it's the little amount of thought that goes into the UI. If someone spent time on it usually it's so it looks a bit nicer, actual considerations about workflows and expectations of real users seem to be ignored completely.

And then there are user interfaces where the developers actually tried to make a difference but stopped somewhere in the middle, like with current Firefox versions. Don't get me wrong, I think the UI customization feature is fantastic and I love that it now supports a large zoom indicator/control in addition to larger address bars and tabs - as the first line of support for my relatives I appreciate that this is now more visible. But then there's that tiny download button/indicator which, for less tech-savvy people, is often simply invisible. As much as I dislike the huge bar at the bottom that was introduced with Chrome, I have to admit it is better out of the box than a tiny blue arrow that desperately tries to be more visible than all kinds of website animations and ads.


I've been working for over a year as a front-end dev, and my main complaint is that the whole thing feels a bit unstable. How do you choose the right tool when it comes to JS? You can go vanilla, use jQuery, Angular, React, Vue, Svelte... and that's just the big ones. CSS seems a bit more stable, with Bootstrap still having the biggest share and Tailwind coming up as a strong contender.

I'm pretty sure we all thought of JS frameworks first when we read 'front end development has failed'. But how about UI/UX? We've had 4 designers so far at work. One of them was really good and left. The others were good designers, but not good web designers. Modern design is bloated and sometimes not intuitive, because we all want good-looking things; who cares if they aren't as functional?

And after all, when you visit a website, not as a programmer or a designer but as a consumer, what do you want? Speed, usability and content. You don't care about animations, parallax effects, or whether the company used jQuery or React. My opinion is that there are use cases for certain technologies, but sometimes we overdo things.


You can't even find a website that passes HTML validation these days.


Validation has been unimportant for about 20 years. That has nothing to do with why websites are bad nowadays.


Accessibility?


No one said it's the reason, it's just a sign of how bad stuff is.


No it's not, validators are extremely outdated. It's not "bad" to forget to close the body tag, it actually doesn't matter at all.


"Validators are extremely outdated therefore not following HTML specification doesn't matter" :facepalm:


I like valid HTML as much as the next guy and try to always write valid HTML myself, but it’s not the be-all-end-all: if Safari, Chrome and Firefox can parse your site 99% of users are happy. No end-user cares if you have charset set on a script that doesn’t have a src attribute, or that you’re technically not supposed to put style tags inside noscript tags.


The problem is that nobody can create a new browser engine, because you not only need to implement all the standards, you also need to make your browser handle broken HTML in the same way Chrome and Firefox do.

IMO we would need to split the document part from the application part: a set of elements and a document format just for blogs and news websites, and more complex components and APIs for application websites. Then you could have a simpler browser that would be enough for browsing blogs, forums and news sites, and a more complex browser for SPAs.


That first part is the Web, while the second part is native programs with an Internet connection.


I think you could have rich web documents with basic interactivity and, as an option, web applications that are "native app like", with tons of APIs for camera/microphone, Bluetooth, canvas, 3D, complex sound.

We have to admit that rich web applications can be very useful; even in the Java or Flash days we knew that sometimes you need an animation or a simulation on a page to better present some content.

Though it makes you think: if we had had a 100% open-source Flash, you could have used it for complex SPAs like Figma while keeping all the bloat in an optional plugin. If the plugin were optional, developers would not be tempted to use it unless it was needed, and if it was needed they would have used something like Flex (a Flash GUI toolkit) instead of constantly reinventing the wheel.


Sure, canvas is appropriate for basic animations/simulations.


I know that it's not functionally required, but after who knows how many years of "HTML isn't written by humans", why is it still not following the rules? I get that humans make errors; I get that they are lazy and don't add alt text and all that stuff. But software that exists to generate HTML should clearly generate valid HTML.


> sites that are unable to obey the most basic rules of navigation and usability, terribly slow despite the hardware progresses. And I can only think that modern frontend development has failed.

We have more information and services available than ever before. And it's never been cheaper or easier to start something new.

I don't know what "basic rules of navigation and usability" the author is referring to, but I virtually never come across a site that can't be navigated or isn't usable. And all software gets "slower" as hardware grows faster [1], it's not anything specific to the web.

The web is amazing, nothing has failed, sheesh. Yes there's always going to be room for improvements both in speed and usability -- developers are human, nothing will ever be perfect -- but the idea that the most popular digital interface technology of all time has "failed" is ludicrous.

[1] https://en.wikipedia.org/wiki/Wirth%27s_law


> And all software gets "slower" as hardware grows faster

Quoting a statement doesn't make it good. It's bad.


I'm just saying it's not specific to the web. It's not some specific failure of front-end technology compared to others. It's not a failure at all, nor is it amazing, it's just par for the course.


I think this branch of engineering is still in its infancy (like electrical engineering in the time of Edison). I am curious how front-end development will look when the dust settles, maybe in 50 years?

Maybe today's heated discussions will be settled for good? Or something new and better will be invented?


For me it failed mostly because of the complexity involved. Even a rather simple interactive website requires weeks or months of coding and testing. WYSIWYG editors don't cut it, and we need to implement the same things over and over again (e.g. login).


I believe a part of this is due to a pipeline issue. I wanted to become a web developer about seven years ago. I decided to read articles, watch videos, and attend some online courses to learn all about it. I even joined local developer sessions to see what the community was actively engaged in. Not once did I ever see or hear about developing a website as a traditional multi-page site (MPA). Literally everything was about SPAs: React, webpack, etc.

Naturally, I wanted to ensure I was learning employable skills. I didn't want to learn how to make MPAs when that methodology was soon to be obsolete and going away like the dodo bird (or so I thought). Sadly, I think many new devs are like me. They follow an online tutorial and learn how great a framework like Django is. But the tutorial guides them into serving the site as an SPA, completely ignoring the fact that Django has a highly practical and powerful built-in template system that makes web development easy for most use cases!

This has left me with the impression that this community has a "shiny object" problem. We throw away perfectly good tools and methodologies for the next great thing without thoroughly thinking through the use case to see if the increased complexity is justifiable. There is nothing wrong with SPAs; it's just that they have become a hammer-and-nail problem. Not every website should be an SPA, just like not every website should be an MPA.


Has it failed? Or is the current state that people prefer “native” apps to this kind of rich interactivity with long lived state? And instead the browser is a place that feels like it’s going to load a somewhat interactive document. The front end work here is often seen as grunt work on top of backend services and not a discipline in its own right.

So instead the awesome front end dev has switched from making something like Slack or VS Code work great in Chrome to building it in something like Electron for native experiences.


It’s been the same problem since the "web design should be pixel perfect" days; instead of print design, the web is now in love with app design.

The difference now is that there is less good practice. But look at gov.uk and you see the difference between good web design that doesn’t need to be an app or have carousels.

Web development now has the Sharepoint problem: it can do everything given enough effort. It’s not about what is possible but it’s about what you want from it.

Before it was flash intro movies now it’s a shop front with modals and carousels.


The real pain points:

- react programmers who solve every "problem" by pulling in yet another 3rd party dependency

- understanding react vs react with hooks + to redux (or not) + saga? thunks? etc etc...

- webpack


I don't think the web is slow, unless you're running a seriously underpowered computer. That's not the case for me. The web is not as fast as it could be, and that's because the customer does not want to pay for the fastest website in the world. It should be fast enough, and then features matter. You can spend a year making your website load in 0.1 seconds while your competitor implements important features, and his 2-second-loading website will win.

And that's not really about the web. It's about software in general. Discord launches in 8 seconds. Ventrilo launches in 1 second. But people will use Discord, because 8 seconds is not that bad, and then features matter.

And that's not really about modern software, as old software was slow as well. Also it was very buggy. Windows 98 used to BSOD here and there, so you had a routine to press "Save" button every few minutes and keep few copies just in case. Emacs was expanded to "Eight Megabytes And Constantly Swapping".

There's some old software which survived till today and it's now wicked fast, because it was made for old hardware. But people still prefer to move on to new software, because of features or just shiny new design.


reddit.com is a slow and flickering mess even on my gaming PC. It's the worst example that comes to mind though. But if that same PC can run incredibly complex open world games while simulating (for instance) an aircraft complete with the cockpit and displays driven by emulated onboard computers at 60+ fps, it's not too much to ask the same from a webpage that "just" needs to do text layout and display a couple of images.

Also: it's considerably faster and cheaper to build a fast webpage than to build a slow webpage. I'm quite sure the "new Reddit" was a lot more expensive to build than the "old Reddit".


I am curious what Reddit's traffic breakdown is between the 'old' subdomain and the 'www' subdomain.

Personally I will navigate away from the new site and only use the old domain.


It's not slow for me. Took 2 seconds to show me content.


2 seconds is pretty slow to show a webpage, though (assuming you're not on a '90s analog modem connection). Also try scrolling, navigating, clicking on links, etc... It feels like moving through molasses.


Most software I use wasn’t written from scratch in the last 10 years. It’s new only in that people are still maintaining it, but I don’t get to decide which features I want; it’s an all-or-nothing buffet.

On a new and powerful computer, most of the web is dog slow. There is no reason most websites should take more than 1/10th of a second from click to fully rendered. Anything past that is simply wasting my time.


The hardest thing is making things pretty and flexible while maintaining ease of development. Newer UI kits like Flutter are basically architected like game engines. This is a challenge that UI framework designers will have to confront if they want more native alternatives to electron. Whatever replaces electron must at the very least be similarly easy to use, accessible to many languages and runtimes, and trivial to style with non-copyleft usage license.


All of that is true until it isn’t. Ask all the stock trading apps how Robinhood out-UXed them. When the out-user-experiencing happens, you pretty much lose your entire business if you don’t undertake a paradigm shift. Stay ahead of the curve a bit.


I think it could be argued that Robinhood’s UX is a sort of feature on its own. Stock trading isn’t simple by any means, and Robinhood found a way to make it really approachable for everyone and that’s almost admirable. Everything else about their business sucks, but that one piece was truly impressive.


I’ve echoed this point a few times too. There are a lot of apps that can barely figure out a todo list (every glorified note-taking app/PM software), and here’s a company that boiled down Derivatives (with a capital D) trading to about two or three screens.

It’s an absolute masterclass.


I’m personally impressed by how terrible the other trading apps are comparatively. Obviously they have more features etc but you’d think Vanguard & co. would try to follow in their footsteps.

Vanguard’s beta app, Beacon, is a step in the right direction, but I can already tell it won’t even compete with Robinhood’s design.


I think it was more that ‘here is a free margin account with options enabled and no commission fees’ went a lot further than the UI.

Also, the UX was implicated as the cause of a suicide, so that is not good.


I thought Robinhood got popular because they marketed the shit out of commission-free trading for the little guy.


Software development in general has failed.

20 years ago I had an iPAQ, which is basically a handheld Windows machine. Today I have an Android. 20 years ago, the iPAQ had 32 megabytes of storage. My Android, if it only has 32 gigabytes, is used up in a month by just a couple of apps.

I could do the same things 20 years ago with my iPAQ that I do with my Android today. Yet today it requires 1,000 times more storage space and 100 times more processing power.


One thing not mentioned often enough about SPAs is how they complicate simple operations. In this component - https://gist.github.com/polydevuk/96d89642f114707b2f1a0cc316... - I have a select box for tzdata regions (Europe, America, etc.), and when a region is selected it populates a Locations menu beneath it (London, Paris, etc.). Simple in jQuery or pure JS. In React, however, once a region and location have been chosen, it caches some kind of hidden index value for the selected option, such that a subsequent region choice (without a page refresh) selects the item in the new Locations menu with the same hidden index value as the previous selection, instead of the default header option. What was initially simple HTML plus minimal JS becomes a complex attempt to manipulate React-specific magic.

Using react-form? Fine... until you need to mix it with react-select or material-ui, and then you're in for a world of pain deciding how to mix controlled and uncontrolled components. Add Redux into the mix and you're adding orders of magnitude more complexity. Frankly, I don't have the time. I would much rather use that time to add features to my Rails back-end and spend time with my family.

Back in the late 1990s and early 2000s, CSS browser compatibility and DHTML were the big time-sinks. Now it's SPAs. Back then we dreamt of a day when web standards were fully supported. Today that dream is a reality, but you wouldn't know it, because we've just made everything even more complex.
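For what it's worth, the plain-JS version of a dependent menu really is short. A minimal sketch of the logic (the `zones` data and all names here are hypothetical, not taken from the linked gist):

```javascript
// Sketch of a dependent Region -> Locations menu in plain JS.
// The key detail: every rebuild starts from the default header option,
// so a previous selection can never "stick" across region changes.
const zones = {
  Europe: ['London', 'Paris', 'Berlin'],
  America: ['New_York', 'Chicago', 'Los_Angeles'],
};

function locationOptions(region) {
  return ['-- choose a location --', ...(zones[region] || [])];
}

// In the browser you would wire it up with something like:
//   regionSelect.onchange = () => {
//     locationSelect.replaceChildren(
//       ...locationOptions(regionSelect.value).map(name => new Option(name)));
//   };
```

Rebuilding the menu from scratch on every change sidesteps the stale-index problem entirely, because there is no retained selection state to get out of sync.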

The only development I can see saving us from this mess is Stimulus/Hotwire from the Rails team, as implemented in other frameworks such as Django, Laravel and Phoenix.


It took me a while to understand your React example (or at least I hope I get it now). But I don't know… I agree that React has some pitfalls and lots of people don't get it right (especially forms!) but there are ways (clean ways, in fact) to solve this. It's not React's fault, though. Handling complicated state has always been hard and it's not exactly easier in jQuery or pure JS.


But in this case, which isn't atypical, it WAS easier in jQuery. Much easier. I had it working fine with jQuery but wanted to see if I could make it work with React. Here's the full component: https://gist.github.com/polydevuk/96d89642f114707b2f1a0cc316...


That doesn't look bad to me, though. Pretty much 90% of the code is just HTML, anyway, and I doubt it would be much shorter in jQuery.

If you allow me a few comments on the code:

> .then(res => { setYou(res[form.name1].planets) ; setOther(res[form.name2].planets) })

setYou, setOther will usually each trigger a re-rendering. It would be better to store them together if they're always changed together.

> .then(res => document.getElementById('results').style.visibility = 'visible')

You might want to look into `useRef()` here.

> <select {...register('region2')} onChange={ e => setRegion2(e.target.value) }>

This is what I meant in my earlier comment: Each change will now cause a re-rendering of the entire component. I think it would be better to split up the code here and move the <select> tags into a separate component with separate state. This component would then only tell your parent component about a state change once both region1 and zone1 have been set.


Great advice. Thanks. Somehow I think it proves my point, though - that SPA frameworks lead to endless complexity making the whole front-end industry the province of senior developers.


Soon after Web banner ads appeared, researchers discovered that users quickly learned to tune them out.

Sometime after that (after clumsy false starts), what used to be advertising people took over the nature of the Web.

HCI and human factors engineering (i.e., technology in service of the user's goals) was largely forgotten, and UX (i.e., technology in service of the developer) was born.

Be skeptical of most UX you've seen, since much of what we're now taught, much of what tools/platforms are available, much of what the jobs are... are now generations away from the needs of users. If you want to genuinely serve the needs of users, question everything, including all the implicit and explicit conventions of your field.

Product concept and design are in some ways harder than they used to be, since, e.g., users' conceptual models, fashion, and available platforms/technologies have been shaped by a pervasively user-hostile and manipulative environment. But, as designers, you still have a lot of leeway to operate creatively within that. Question everything.


Modern frontend development hasn’t failed, the industry has. Frontend development is looked down on as “not real programming” or “simplistic” but it takes years to master and requires skills of both an engineer and designer. Leetcode and bootcamps will not prepare you for architecting a high performance, scalable, accessible, SEO optimized, visually appealing, usable experience.


It has?

I can think of very few areas in software development where there is this much innovation and diversity in frameworks and approaches to building something. Maybe our approach to usability has failed, with all our "necessary" popups and cookie dialogues; there I agree. But to say "front end development" has failed simply because it's evolved is a bit myopic, IMO.


"I can only think that modern front end development has failed"

I think web devs have no real incentive to improve their front end, since search engines don't push and reward them to do so.

Ranking algorithms are a black box, but you can notice that popularity matters more than quality of content, so eventually the web gets bloated and you end up with information overload.


This infantile false dichotomy again. We get it; BE engineering is far superior to JS/FE engineering so therefore anyone belonging to the BE/non-JS tribe is a far superior intellect and person. We’re glad you feel better about yourself.

I challenge you to be the change you want to see: 1) build something better 2) get wide adoption of it.


I'm amused by certain "big names" coming in and responding with arguments about how front-end wasn't that great back when either.

This idea that modern front-end has failed is not some absolute statement implying that all that came before was a success. In fact, it's a statement that, with all our modern technology and ideas, we're still in a situation where we haven't improved on front-end tooling and development.

Also, just because front-end has exploded and created a huge amount of jobs, doesn't mean we're doing it right or well. There is just as much evidence to suggest that we need so many front-end developers because our current tooling and methods have made front-end development a massive job requiring many people.


What we're missing in front-end development is more abstracting away solved problems so you don't have to think about them ever again.

Shared frameworks and libraries do help significantly with this but the fact that they still undergo substantial churn tells me they haven't completely solved the problems they address. When's the last time you needed to drop down into Assembly to fix your JavaScript? Probably never—you can operate at that higher level, which lowers barriers to entry and increases productivity for developers. (Put your hand down, you wonderful machine code hobbyist, you.) In contrast, we still "drop down" from web app components into JS/HTML/CSS all the time.


Most of the issues I see in front-end code are repeats of problems that were solved decades ago, some of them even 100 years ago (debouncing)!

No separation of concerns, no layers; or if there are layers, there's controller code in the view layer or view code in the controller layer.

etc.
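Debouncing really is that old: contact bounce in mechanical switches predates software entirely, and the software version is only a few lines. A sketch (plain JS, no library):

```javascript
// Debounce: delay calls to `fn` until `wait` ms have passed without a
// new call, so only the final call in a burst actually runs. The classic
// fix for chatty input/scroll/resize handlers.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}
```

Typical usage would be `input.oninput = debounce(runSearch, 300)`, which turns a keystroke storm into a single call once the user pauses.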


I can only think that modern power generation systems have failed. There have been so many advances in power generation in the last 100 years - nuclear, wind, solar, hydro, even geothermal. Yet when I look around, all I see are dirty combustion engines and coal power plants spewing greenhouse gases and toxic fumes.

Both statements are moronic. Taking advantage of modern design philosophy, modern CDNs, modern frameworks - all of that takes time, attention and MONEY. If you don't put in the investment in all of these, the fact that you have access to "modern frontend development" is meaningless. It's like asking why not everyone in the world is driving a Tesla.


If your thing is considerably worse than any (handwritten) HTML page, maybe you are doing it wrong. Well-written HTML, even without CSS, is fast and accessible, and that should be your baseline.

Modern CSS is extremely powerful, and you can get good-looking websites with just handwritten CSS. If you change your site once a week, instead of using a CMS maybe consider a static site generator like Lektor (fast, and no need to worry about people hacking your WordPress instance).

If you really need server-side code/interaction, reduce dependencies and still let HTML do the heavy lifting.

I did some experiments with WASM and Rust and it is really neat, but IMO it doesn't beat the simplicity of a basic HTML page yet.


Big web companies killed open protocols (remember reading news via RSS?) in favor of exploiting and evolving web technologies as a way to push effectively proprietary UIs with spyware built in.

The whole web was based on the premise that the server (media provider) and client (media consumer) could be created by different parties and should therefore assume nothing and rely on protocols. That’s not what happens today, since with JavaScript the same party effectively “takes over control” of the user agent.

My radical opinion is that the open-protocol web was killed the moment JavaScript adoption became standard, the same way an open web wouldn’t have been possible had Macromedia Flash become a standard.


That’s a really negative, glass-is-half-empty viewpoint. How can anyone not see the incredible wealth of unbelievable awesome that is the modern web?

How can anyone use 200 popular websites and see shit instead of being astounded by how awesome it all is?

I look at the same thing and see unbelievably powerful web applications, the web browser as the most sophisticated fabulous software ever built (alongside Linux), a constant drive amongst front end developers/libraries/tools for new and better ways of doing things.

And I see so many websites that it’s impossible to make sweeping generalizations about front end development.

The world contains certain people who want to see the bad... this is what’s happening here.


It still surprises me that twitter, one of those high-flying SV companies that overpays massively for talent, still has a truly terrible UX on low-end hardware. Opening a link brings my (admittedly old) iPad to a halt.


There have always been bad developers. Unless he means that he expects that by now, someone would have developed a completely idiot-proof infrastructure for making UIs, we can always assume that there will be people making websites who do not have the time or inclination to learn the tools and technology and make the wrong decision at every turn.

... And even though some idiot-proof frameworks exist, they still require somebody to put enough effort in to discover and use them.

I mean, we can apply the same criticism to the giant pile of crapware in the mobile stores. Some software is good, some ain't.


It's never too late to consider server-side web technologies (new or old).

We would have closed up shop a long time ago if we hadn't discovered Blazor and all of the productivity gains that go along with it.


As a recent example: Substack is worse than WordPress. And it's probably related that you can read WordPress comments with JavaScript disabled, but not Substack comments.


I want to agree with this, but I do have reservations.

The competing priorities of anything are hard to balance. Try building a physical store sometime. Even a simple garage sale. Layout for navigation and perusal is not a trivial problem. Especially if you don't have the space to make comfort a priority.

To that end, how much is actually broken, versus you just noticing the maintenance costs? Ever looked at public restrooms? Are they broken, or just underfunded?


If I were to just look at the bookmarks in my browser, I'd get the impression too that frameworks are useless. They're all a bunch of sites that render pretty static content.

But the app that I work on in my day job (that I don't personally use) could not possibly be built without a framework.

tl;dr: a lot of things don't need a framework, but frameworks are still very necessary in many cases. Likely not ones that regular consumers are exposed to, though.


All software development has failed. Everyone is mindlessly copying megacorps with terrible development practices. Then when their crappy systems fail constantly, they blame the tools and demand even more complicated tools from their corporate masters... Which make their code even less reliable.

Tooling is not the problem, the problem is you and your selfish subconscious urge to create more complexity to create more work and more income for yourself.


I don't think greed has much to do with the complexity. Insecurity seems to cause a lot. Devs feel if they don't go with all the latest stuff to all the nines they aren't worthy.


I have to agree with this, but I think insecurity may affect junior developers more than their seniors. Most of the new, in-demand tech that companies require of applicants can be learned within a week or two, well enough to contribute successfully, provided you're already experienced with a number of complex frameworks or libraries. Then it's simply a matter of reading the manual to figure out how to move the data from one place to another.

On the other hand, for a junior dev trying to break into the workplace, selecting the correct technology could be considered an important investment of their time, and ultimately their livelihood.


You are probably right that there is a lot of complexity for complexity's sake. It's sad, actually. Also, solving problems that you don't have and are never going to have (k8s, I'm looking at you).


Loads of comments in the Twitter thread trying to pin the blame on someone. It’s product managers; no, its frontend developers; no, it’s backend developers; no, it’s users…

An organisational way of thinking is to figure out whose fault it is and charge them with fixing it.

Me, I don’t give a crap who’s to blame. Just fix it.

Stop collectively waiting for someone to do something about it and just do something about it.

Fix your own code. Make your own code better. Be an example to the world.


I know game developers who lament the passing of Flash. Though the runtime had issues with efficiency and power use, the programming environment was great for games, in terms of being truly write once, run everywhere. Even networking had serious quality of life benefits for both devs and users.

The equivalent in JavaScript is an order of magnitude worse in those regards. It's riddled with compatibility problems, everywhere.


Yes, just as cooking has failed. With so many recipe books how can you have bad cooks?!

Just as architecture has failed. Just look at those buildings!

Art has failed! What happened?! Where are the Michelangelos? The Leonardos? The Picassos?

There are many more people writing code now. History will only remember the greats, and forget the rest.

There were always websites that loaded poorly. Even "back then". And there were good ones too. Same as today.


Some things that have changed:

1) We've gone from function to content mostly.

'Apps' used to mostly deal with files, like Word; otherwise they were standalone. Now we deal with an incredible variety of content.

2) We've traded consistency and reliability for speed to market, and rapid evolution.

App releases were once a year, web releases can be daily.

In other words 'the web' is not 'about apps', it's about something slightly different.


Anyone remember http://www.useit.com in 2000? Bring back Jakob Nielsen.


I have around 25 years of web development experience and I'd also put some of the blame on PMs and growth hackers who want to build rich UI experiences where often a simple site would suffice. Add UX people with limited knowledge of HTML and CSS, or backend engineers who only want to build APIs. All the trackers and auto playing videos do not help either. Ah well...


These types of comments always come from people who have zero real-life experience working at the intersection of code and business.


Modern front-end development is very prone to producing shit, and it can happen in a very short time, not over years. If user experience and performance are still important to you, you can do good things. But if your only concern is trying new things and messing with your application, the end result will be shit.


I have a feeling that I am supposed to keep this a secret, but if you are tired of broken and slow websites, check out the Gemini protocol as a break from the BS.

It's not going to fix the web or replace it in a lot of cases, but it's nice to have an alternative for some things.


If you share the feeling, join the small Internet movement: https://cheapskatesguide.org/articles/small-internet.html/


The tools exist to solve these problems: blame developers.

TypeScript and modern frameworks with a contemporary understanding of app design kill nearly every historical pain point of web development, but many are stuck writing jQuery and using WordPress.


I usually read toxic undertones of gatekeeping in tirades like these. "Real engineers like me wouldn't produce such garbage, and frontend engineers aren't real engineers because they learn and like this stuff."


I mean the code / tech isn’t really the issue. A newsletter pop-up, a cookie banner, an Adblock warning, and inline flashing ads later, you can’t even see the website. The experience sucks, but that’s not due to bad tools.


As someone with very limited resources I keep looking for cookbooks for UI/UX tasks. Any actionable checklist of these "basic rules of navigation and usability"? Sorry for the basic question.


I think that front dev is fantastic today, especially for amateur devs such as myself.

I used to do some HTML and CSS 25 years ago and it was a nightmare to make it look like I wanted.

Today I use Vue+Quasar and everything just works.


> I used to do some HTML and CSS 25 years ago and it was a nightmare to make it look like I wanted.

Did you ever stop to think if the point perhaps wasn't to make it look like you wanted, but like the user wanted?

I think that's a major part of the problem with the Web in general: A severe case of designitis. The web was never supposed to be a glossy magazine. The server was supposed to serve up content, (mainly text), and then the browser was supposed to render that content according to the user's settings — by all means, let the artsy-fartsy types use some fancy designer font; but above all, let those of us who are getting on in age use a slightly bigger one, so we can frigging see what it says in the first place. Compared to that, I don't give a shit if it messes up the precious "flow" or layout.

Designitis -- it's the curse of our times.

(Well, that, and advertising.)


The advancement of what is possible on the FE has outpaced FE developers' ability to keep up. I believe that is because of the lack of standardization across the WWW and the reliance on existing libraries like Babel to bridge the gap. How code behaves underneath now varies depending on which library you use, which tends to lead to libraries or developers overcompensating so their code runs anywhere. IMO, we need to stop inventing the new, ensure browsers behave the same for 90% of users, and then start innovating again. Sure, we might have to do this every few years or so, but otherwise the gap keeps widening; pausing would give us longer periods of stability.


It’s mostly just ad tech. Sites without ads are usually pretty good.


I think every discussion in this topic is just a collection of every commenter's inner question: "am I satisfied with the project I'm currently working on?"


Websites for giant retailers like Walmart, Home Depot, etc are shit, and it blows my mind. How much are these developers getting paid to make the modern web so unusable?


They don't get paid to make the modern web usable. They get paid to do what their project managers tell them. I see this first hand everyday. I've tried to fight it. I've tried to prioritize performance, usability, etc. It's not a priority for a lot of companies. At my company, the priority is delivering features that the sales team need to close a new customer.


It's the same as it's ever been. The same mistakes, the same things done right, just all in different shaped packages.

Fundamentally it comes down to the same thing: use appropriate tech. Use it because you should and not because you can. Don't build a SPA if you can do it in a simpler way with plain HTML and CSS. Don't build an app if you need a website. Don't bother with some enormous backend horror if your site is never going to need to scale to that size.

It was the same way back with Flash, dhtml, html5, early JavaScript, CSS, Dreamweaver, whatever - if you need something, use it, if you don't, don't.


What a dramatic and ridiculous statement. I suppose you could say I have a vested interest in holding an opinion like that. However, the dude who posted that tweet is in the tech space, which (imo) excludes him from being "just a user of the internet". Furthermore, REAL USER demands and expectations increased with native mobile app usage, so of course the complexity of pages/apps on the web has followed suit.


Frontend development is fine. People failed at it.

Like they always do at first in important things.

And it’s not even developers who failed, but users and managers who just don’t care enough.


Don’t worry, you can make a server rendered site slow too. It’s not the tech, it’s that “good enough” is measured in seconds for human willingness to wait.


What about advertising on websites has failed us all?

Anyways, ViteJs and Tailwind jit recently are making development fun again. Too bad TypeScript still ruins the party.


Has anyone found a frontend JS framework + backend combination with the developer ergonomics of say Rails, Django or Phoenix? Would love to hear.


i'm currently trying out https://turbo.hotwire.dev/ which is a new library from the basecamp guys, which makes it really easy to create SPA feeling sites with rails, using little to no javascript, and in a very unobtrusive, intuitive way. i'm guessing it's going to be an integral part of the upcoming rails 7 release.


I think the bigger reason here is that devs jump directly into Frameworks rather than understanding the basics of web development itself.


I had hoped the end of Flash would have brought this nonsense to an end but it seems the bozos just moved on to continue doing damage.


Javascript being enabled by default motivates developers to include javascript in their site by default. It was a mistake.


We fail at manual state-management, and the mostly used patterns we are using are complex leaky abstractions.


The worst is the dynamic DOM manipulation, where what you print isn't what's rendered on display.


I want to say this is hyperbole but it’s actually correct. I don’t even have JavaScript enabled anymore.


Wrap them inside Electron; now they are not web anymore. Would you have the same complaints?


Yes and no. I both agree and disagree.

For a lot of it, yes, it is a giant mess.

I'll posit the reason modern front-ends are a giant mess has to do with the fact that everyone tries to hack, slash, and transform the document model into an application model. This means they don't want to reload the page, and they then have the burden of manipulating the DOM in a very crude way.

This creates a predictable empire-building game of building a framework to make it easier, and every framework succeeds to a certain degree. They succeed in a number of great ways, but then they run into abstraction leakage, which causes them to bend in strange ways. For many applications, that's fine, but as the number of applications grows this creates problems for the common case. So, quality gets harder to achieve.

Now, my philosophy is that we just don't think simply enough. Most people don't think deeply about this stuff (and good thing, because we would never get anywhere without people running forward making messes), so what do we do?

Well, I think the real answer may not be to fix front-end development. Instead, I'm coming to the conclusion that we should fix back-end development. Instead of using the tried and true work-horse of request response (i.e. HTTP), we need streams (i.e. WebSocket). Streams let you pair-bond state, and this is exceptionally important.

Ignore the complications of WebSocket for a moment (I don't deny there are problems, but they are solvable; just no good common solution yet).

What a stream lets you do is pair-bond/entangle objects, and this lets you simplify the development model to a great degree such that the front-end object is a proxy to the back-end model. This entanglement then forces you to contend with two things: (1) how do I change the object, and (2) how do I respond to object changes?

Once you address those two concerns, you can build applications fairly easily. Using this framework, we can then work towards understanding why modern web development is so painful.

Since objects don't tell you how they change, you have to reconcile that change. This is why things like React have a virtual DOM, but you still have to read state from the server and then shred it into components for that to work. It's easy to miss an important data point since there is no direct entanglement. Ultimately, you have to take an entire object and then mangle it into a form that is pretty to the user without disruption, because the DOM has hidden state (i.e. scroll bars or text selection). The classic web avoids so many problems because that disruption is built in.

Since failures of the request will happen, you further have to deal with partial failures of state changes which is why you generally need a thing like Redux to be an immutable state container so you can rollback state changes. OR, you need GraphQL such that the client only deals with atomic failures. However, most people fuck up GraphQL, so you have to have both GraphQL and Redux.
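The rollback idea can be sketched with a minimal Redux-style store. This is an illustration of the pattern, not Redux's actual implementation; all names here are made up:

```javascript
// Minimal Redux-style store: state is replaced, never mutated,
// and a history of past states makes rollback trivial.
function createStore(reducer, initialState) {
  let state = initialState;
  const history = [];
  return {
    getState: () => state,
    dispatch(action) {
      history.push(state); // snapshot before applying the change
      state = reducer(state, action);
    },
    rollback() { // undo the last dispatch, e.g. after a failed request
      if (history.length) state = history.pop();
    },
  };
}

// Example reducer: a counter.
const reducer = (state, action) =>
  action.type === "increment" ? { count: state.count + 1 } : state;

const store = createStore(reducer, { count: 0 });
store.dispatch({ type: "increment" });
store.dispatch({ type: "increment" });
store.rollback(); // the second increment "failed" server-side
console.log(store.getState().count); // 1
```

Because each state is a fresh object rather than a mutation, undoing a failed change is just restoring a prior reference.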

Things like svelte are better because it uses language techniques to help you shred your data into DOM, but it still suffers.

My claim is that you can simplify front-end development BUT you must also simplify the back-end, and I'm doing this over at http://www.adama-lang.org/ where I'm solving the problem for board games. I claim that board games are a limit point of technical complexity for transactional interactions between people, and partial failures are catastrophic for the experience.

The way I'm building a game right now is that I have a client with which I can connect a giant JavaScript object to the server. There is no proxy, just an object. The server will get an update, then update the object and tell you about it via a change callback. These change callbacks allow you to synchronize the DOM to the object without an intermediary, and the DOM callbacks can safely read the object.

This lets me use vanillaJS without fear.

When some DOM callback wishes to change the object, then all it does is send a message via the stream to the back-end which will (1) authenticate it, (2) validate the change, (3) incorporate the change, and (4) spit out a data change. Much like a database, the UI is a tailer of the log.
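As a rough illustration of that shape (a sketch only; the names are hypothetical, not the real Adama client API):

```javascript
// Sketch of an 'entangled object': the server pushes deltas over a stream,
// a change callback fires, and DOM updates read straight from the object.
function makeEntangled(onChange) {
  const obj = { data: {} };
  obj.applyDelta = (delta) => { // called when a server update arrives
    Object.assign(obj.data, delta);
    onChange(obj.data); // the UI is a "tailer of the log"
  };
  return obj;
}

// Vanilla-JS style consumer: synchronize a (stand-in) DOM node to the object.
const fakeNode = { textContent: "" };
const game = makeEntangled((data) => {
  fakeNode.textContent = `score: ${data.score}`;
});

// Simulate two server-sent deltas arriving over the stream.
game.applyDelta({ score: 0 });
game.applyDelta({ score: 3 });
console.log(fakeNode.textContent); // "score: 3"
```

The point is that there is no diffing or shredding step: the client never computes state, it only renders whatever the server-driven object currently says.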

The JavaScript library I have is in its infancy since I'm balancing my time between the language, the devkit, the client library, the distributed system design, the game tools, the canvas renderer, and ultimately the first game to launch.... And I'm only working on this 2 hours every evening until I retire, so... it's going to take time.


I'd say we're likely comparing apples to oranges here. It's a common complaint on HN that frontend development is a mess and evolves too quickly, that it isn't "what it used to be". Static pages, small downloads, very little coding overall. What we miss is that those kinds of sites still exist, but they're not the sites we're complaining about.

Back in the late 90's and early 00's, most websites were largely "brochure" websites. Companies were getting online because they heard about this whole internet thing and they knew they just needed to "get on the internet". You largely saw companies turning whatever brochure they had into a digital one online. Those websites were insanely straightforward. Render a page, static content, the end. Maybe it's a little more dynamic than that; perhaps you're running a basic LAMP stack listing, say, real estate. Query the database, render the page, we're done here. Very little interactivity.

Those websites still exist, but they're likely not the ones we're complaining about. Sites like boeing.com are those traditional brochure sites which deliver very little interactivity. Relatively quick and nimble; the hero image is larger than all the JS they load (and, honestly, they could probably remove most of their JS now - they're using jQuery, Modernizr, underscore, stickyMenu, require.js and a bunch of probably unnecessary jQuery plugins we could eschew today, but we're here to complain about today, not praise it, amirite?).

What we're actually complaining about are usually the interactive sites. Your Twitters, Facebooks, even sites like Reddit that dynamically load content as you scroll, click posts and download more content, comment - all manner of interactivity. Typescript? React? Functional components? Hooks now? Learn Redux? Unlearn Redux? CSS flexbox? CSS grids? My website now has a build process? I used to just edit the PHP file and upload it via FTP. What the hell?

These are not simple "brochure" websites, so yeah, they're more complex. They're applications. And being that they're applications, we shouldn't be comparing them to brochure websites, since they're delivering a completely different experience and have a completely different goal. We should be comparing them to desktop applications and the desktop application development environment, and therefore, comparing their stacks against the likes of C# and WPF, Swift and Cocoa, C++ and Qt or C and GTK.

And compared to those tools, how does the modern web development tool chain stand up? I have no idea, I don't write desktop applications! But what I do know is that companies like Slack and Discord elected to use Electron rather than create their products in some desktop application framework. VS Code is Electron-based, and being a Microsoft product it had all the reason in the world not to do that - and it's dominating its space today. Offerings like Figma, which by all rights one would imagine should have been desktop software, elected to be an online tool and have completely shut out the competition.

So if you're going to complain about online applications, don't compare it against an old programming paradigm that would never apply to these products. Compare them against the modern desktop application development experience and let's start the conversation there.


This diagnosis needs more precision. What/who, exactly, has failed? What needs to be changed, specifically? Look at the comments here and you see many different people taking this in different ways.


I started going to Awwwards to find what to avoid doing.


Interesting take. I love going to that site because I want to see new things that push the limits of the browser. I would love the opportunity to build some of the sites that are showcased there, just to try new things. I don't think these types of sites are "broken" at all. Most of them work very nicely in my experience and use clever ways to hide load time or other negatives. I wouldn't ever use most of the cutting-edge ideas in sites I build commercially, because the sites I build are for a more mainstream audience that probably isn't tech savvy, so I keep it to the basics, but the Awwwards stuff is still very cool to me.

I think the broken stuff the tweet refers to is all the online publishers with floating video ads, 50 tracking scripts, bloated pages, zero accessibility, etc on a page for a 500 word article


Oh, those are impressive websites, don't get me wrong.

But I think that intent and purpose in design matter. And the Awwwards websites' purpose is pretty vacuous. The websites there are ad brochures. A single short sentence in a huge font above the fold, a compelling image - the purpose there is to grab attention while offering little information. Compare it with the densely packed, more honest and less thirsty websites of the 90s and you'll see why a lot of people are nostalgic.


> I love going to that site because I want to see new things that push the limits of the browser.

Browsers are so hugely (over-)capable nowadays that anything that comes even close to "push[ing] the limits of the browser" has to be at least as huge and complex as, say, MS Excel.

99.9... Well, "five nines" ring a bell? -- percent of web sites are light-years from needing anything like that. Sounds like complexity for complexity's sake. Web wankery.


>I can only think that modern front end development has failed

Well, one can just as well say "in-memory stores have failed" based on some impossible to meet criteria of their own device...


Avoid any flavor of 'the sky is falling'.


Thank god I quit frontEnd, the story was like this:

- 2010: dude look at this new thing "angularjs", a model/view/whatever (MV*), the $scope dude it's crazy, something dynamically changes in the $scope and HOPP it appears instantly on the page :D

Fast-forward a few years: have a watcher in a model that changes something in another model, unpredictable changes down/up, random mutations... duhh da heck?? We were good sending HTML from the server, missing those days...

- 2014: dude look this new thing, it is called "React" and you know what?, forget about the state up/down, down/up and allover the place, the state is ONE DIRECTION with React, and it is backed by Facebook bro.

-(me) Oh yeah? Ok what about the 3 years I spent battling with AngularJS?

- Don't bother bro Google is dropping active development and maintenance of AngularJS, it is either you switch or use their beta Angular 2.

-(me) No, thanks I'll go for React.

- Dudeeee look it is Redux, you dispatch an action, and it takes care of updating the state, and you know what, you can combine it with Redux-thunk to avoid race conditions.

-(me) Wait wuuuut? Race conditions in the front? Feels like it is becoming a game engine bro da heeeeck...

- Brooowww, look dat new baby, VueJs :D

-(me) wait wuuuut? Don't tell me Facebook is letting down active development of React.

- No bro it's just another cool front lib it is really dope, and it's only the V(vue) part of the front taking care of reactively updating the DOM.

-(me) ok but what's new?

- Now you can have a DSL addition in HTML cool stuff like 'v-for' to dynamically hide/show a DOM element without writing any code.

-(me) wait, wuuuuut, this reminds me of something I've seen before, the ng-if of AngularJS is coming back under another name? I'm losing my mind broooo.

- Dudee look its Gatsbyjs, generates a cool static site, but you have to write React and graphql for it

-(me) ok.

- Dudeeeee look svelte is killing them all.

-(me) ok.

- Dudeeeee this one is an absolute killer, nextjs: everything is server side rendered, you write your React component, you describe the data, and the server takes care of building the static HTML and sending it to the client.

-(me) wuuuut? But that's what we were doing back in 2009?

- 2019 duuuuudeeeeee ...

-(me) STFU I quit.

(and since then, I'm happy cloud solution architect)


In what way are current websites not usable?


A lot of websites:

- reimplement browser features and do a bad job of it

- are unnecessarily slow

- try to stop being slow by breaking even more stuff

Some examples I’ve come across within the last week:

- Twitter pages randomly failing to load until the next hard refresh, probably because of service workers somehow (which are unnecessary for user experience improvement since HTTP cache exists and I don’t have notifications enabled)

- Twitter’s offscreen tweets not being searchable with browser find because they were optimized out of existence – and I want to do this in the first place because it decided to refresh itself while I was in the middle of reading

- YouTube application state getting out of sync with browser history that it overrides when I navigate Back too fast for it – the URL stops matching what’s playing and the navigation doesn’t take place

- GitHub’s reimplementation of browser navigation reimplementing a loading indicator but not the stop button (I think I’ve also gotten it out of sync before)

- Form state not being restorable across history navigation because the fields didn’t exist when the page loaded

- Reddit’s redesign is so unusably sluggish that I don’t stay on it long enough to run into any other problems


> Reddit’s redesign is so unusably sluggish that I don’t stay on it long enough to run into any other problems

I agree with everything you said, but allow me to reiterate this point. Reddit's redesign is the most hostile thing I've ever seen. It explicitly blocks me from reading the discussion. What's the point, then? It's absolutely unusable. If they ever remove old.reddit.com, I'm not following a Reddit link ever again.


I started reading your list looking for something to argue with but I'm sadly agreeing with all your points which I encountered myself.

I didn't even realize they're reimplementing browser features and that's a really cool observation.

I think it's paramount to understand that this is not a list of bugs to fix, it's a list to convince us that we should consider Server Side Rendering more often and avoid client side navigation, history API with all its possible race conditions...


Some people create poor websites, others create great ones.

I personally love all the Dutch public websites, they have great UI/UX and are fast. For example: https://coronadashboard.government.nl/landelijk/vaccinaties



So true.


Nothing has "failed" on a technical level. I'm sure if a company hired a bunch of highly experienced devs and said "build us the fastest, most user-friendly site ever, everything else is secondary!" you'd get an amazing site.

You get what you optimize for. User experience is not being optimized for. The ability to prey on users via ads, dark patterns, info harvesting, tracking and even malware is. This in turn happened because people discovered that, mostly, there are only two ways to make significant money online: 1. Be the first to deliver a revolutionary service everyone loves in just the right form for it to catch on. 2. Gather a bunch of users via some shiny object and then exploit the hell out of them.

For obvious reasons, #2 caught on, dominated the web, and will continue to do so until various regulations slow it down.


Given that a lot of people have the title “front end web developer” of course there’s going to be backlash. Most people aren’t unbiased enough to hate on the thing they’ve spent years mastering.

Frankly front end web development is a huge mess if you look at it from an unbiased perspective. HTML wasn’t designed for multimedia, JavaScript was designed in a week. Now the entire modern industry of web development are a series of endless hacks built on top of the old framework. React, typescript, css are all skin grafts on top of technology that was designed for something else. Of course the end result is going to be inefficient. The entire web is new tech patching up old tech. It’s like the 737 max, all of modern web development is an MCAS on top of html. It’s not even a single thing. It’s like hundreds and hundreds of MCAS’s with some MCAS’s built to patch up other MCAS’s.

The fix of course is to start from scratch. Given all of our knowledge of Modern web GUIs design a singular native api that can achieve what we need with significantly less complexity and better performance.

Of course some front end engineer with 10 years of experience isn't going to openly embrace a new API that is not only better but could possibly be mastered in a month by any average dev. If you're primarily a front end developer, I have to say that I cannot trust your opinion. I do not believe any person has the impartiality to deride the very thing that defines their career and expertise. Front end developers are literally just experts at navigating a universe of patches and hacks, and no such dev wants to admit it.

I’m interested in peoples opinions though, it’s just in a thread like this it’s impossible to get an unbiased opinion.


At a fundamental level the "web" has the following core engineering flaws:

1. It is standard to send JSON from server to client, a format that takes up more space on the wire and takes more CPU power to serialize and deserialize than various binary options, all because nobody can be bothered to build tooling for debugging a binary format on the wire and an efficient serialization library into the browser.

2. We require two separate text markup languages to define a UI (HTML and CSS) rather than compiling that down to an efficient binary format.

3. We rely on a dynamic language built in 10 days to write client-side code, rather than a bytecode that any language could target, dynamic or not as the domain demands (wasm may one day finally change this).
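Point 1 is easy to demonstrate. A quick size comparison of the same record as JSON text versus a hand-packed binary layout (Node.js Buffer here; the record and field widths are made up for illustration):

```javascript
// The same record, serialized two ways.
const record = { id: 123456, x: 1.5, y: -2.25 };

// As JSON text: field names and decimal digits all travel on the wire.
const jsonBytes = Buffer.byteLength(JSON.stringify(record)); // ~31 bytes

// As a fixed binary layout: uint32 id + two float64s = 20 bytes,
// and no parsing beyond reading fields at known offsets.
const buf = Buffer.alloc(4 + 8 + 8);
buf.writeUInt32LE(record.id, 0);
buf.writeDoubleLE(record.x, 4);
buf.writeDoubleLE(record.y, 12);

console.log(jsonBytes, buf.length); // JSON is larger, and still needs parsing
```

The gap widens with longer field names, string-encoded numbers, and nesting, which is the commenter's point; the trade-off is that the binary layout is opaque without extra tooling.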

Then rather than a trend of fixing these problems, we instead let these problems spread, as electron brought the html+css+javascript model to desktop apps, nodejs brought javascript to the server so you could use one language for your web app, and on and on.

There is hope though, wasm at the forefront. Today only non-garbage-collected languages are viable for wasm without adding the overhead of downloading a large runtime, but that is an engineering problem that could be solved.

There are so many trivial inefficiencies that can be trimmed away. But perhaps the biggest obstacle to a nice web experience is intentionally user hostile design. The modal popups, the cookie warnings. These waste more time than parsing and fat downloads for most of us. I'm not sure how to solve that problem. People give you blank stares when you complain about it.


Counterexample: Figma.


Could you elaborate?

I don't like Figma very much, but I think that's because I prefer typing code into text files. So, probably not for reasons pertinent to this discussions. Anyway, I'm interested in why you would consider it good.


It is efficient enough to make you rethink the limitations of JS apps, it is basically glitch-free, and it works remarkably well as a full-featured tool for designing screens collaboratively and/or working with SVG.

That said, Figma sort of exists in an “app-like” vertical when it comes to web frontends. The other big vertical, and the one probably being alluded to in this tweet, is “magazine-like”. Those frontends have a much different goal, one that prioritizes responsive design, accessibility, progressive enhancement, and so forth. NY Times has been among the best in that vertical for a long time.


Modern front end development cannot fail, it can only be failed.


I run NoScript; if I can't get a site working while allowing no or very minimal JS, I close the tab.

Life is too short for shitty, slow, bloated websites and buggy apps. Just don't use this trash; if the devs aren't going to up their game and prioritize user experience over profits, then fuck them.

The only way web developers are going to improve is if they start losing users and money. Until then they'll keep crapping out garbage.


I'd argue that life is too short to use NoScript. And why point the finger directly at web developers? Product managers throw a requirement over the fence and web developers figure out how to satisfy their wants, while squeezed into sometimes unrealistic time constraints.


Media websites are a bad way to judge the state of web app development (which sucks for its own reasons). If you manage to hire good developers on a media site, they will try to optimize how ads and the endless list of third party libraries are pulled in.

If you didn’t get some dope developers then they will just half-ass dump all of the stuff needed onto the page (which is why most of the web sucks).

What’s worse is that these sites sit under a publishing/media business, so they aren’t tech companies (the devs are second class citizens).

Every day I run into at least one site where they couldn’t figure out a sticky header, or a decent ‘accept-cookies’ box (one that takes up half the screen, and when you hit X (if you find it), it stutters away, as if CSS transitions were a mind-blowing concept).

Then after that, the page janks down as a whole ad gets rendered. Zero effort is being put in at some of these places. I would consider lazy loading content in the visible viewport the moon for these sites.

This is a necessary discussion because the web is becoming a tech slum with these poorly designed websites. To bring it back to web development at large (both apps and websites), it can be difficult to pose a very simple question to a team: ‘Do you not see that the end result looks and feels like shit?’ It’s too abrasive, but all of us need to start asking these basic questions for quality’s sake.


Media are on shoestring budgets, unlikely to sustain good developers. But it's not only media; many other services neglect user-centric optimization. That is probably due to the proliferation of various kinds of outsourcing/SEO consultants whose interests don't overlap much with users'.



