SPAs Were a Mistake (gomakethings.com)
692 points by andrei_says_ on March 2, 2022 | hide | past | favorite | 610 comments



It's been so frustrating to watch this play out over the past decade.

I keep seeing projects that could have been written as a traditional multi-page application pick an SPA architecture instead, with the result that they take 2-5 times longer to build and produce an end-result that's far slower to load and much more prone to bugs.

Inevitably none of these projects end up taking advantage of the supposed benefits of SPAs: there are no snazzy animations between states, and the "interactivity" mainly consists of form submissions that don't trigger a full page reload - which could have been done for a fraction of the cost (in development time and performance) using a 2009-era jQuery plugin!

And most of them don't spend the time to implement HTML5 history properly, so they break the URLs - which means you can't bookmark or deep link into them and they break the back/forward buttons.

I started out thinking "surely there are benefits to this approach that I've not understood yet - there's no way the entire industry would swing in this direction if it didn't have good reasons to do so".

I've run out of patience now. Not only do we not seem to be learning from our mistakes, but we've now trained up an entire new generation of web developers who don't even know HOW to build interactive web products without going the SPA route!

My recommendation remains the same: default to not writing an SPA, unless your project has specific, well understood requirements (e.g. you're building Figma) that make the SPA route a better fit.

Don't default to building an SPA.


> And most of them don't spend the time to implement HTML5 history properly, so they break the URLs - which means you can't bookmark or deep link into them and they break the back/forward buttons.

The majority of routers for React, and other SPA frameworks, do this out of the box. This has been a solved problem for half a decade at least. A website has to go out of its way to mess this up.
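For anyone who hasn't looked under the hood, here's a rough sketch of the URL-matching half of what those routers do. This is illustrative only - the names are invented, not React Router's actual API:

```javascript
// Minimal sketch of URL-driven routing, the kind an SPA router handles
// for you. matchRoute is a pure function, so the same logic could back
// pushState-based or hash-based navigation.
function matchRoute(routes, path) {
  for (const route of routes) {
    // Convert "/users/:id" into a regex with named capture groups.
    const pattern = new RegExp(
      '^' + route.path.replace(/:([^/]+)/g, '(?<$1>[^/]+)') + '$'
    );
    const m = path.match(pattern);
    if (m) return { name: route.name, params: m.groups || {} };
  }
  return null; // no route matched: a 404 view in a real router
}

const routes = [
  { path: '/users/:id', name: 'user' },
  { path: '/', name: 'home' },
];

matchRoute(routes, '/users/42'); // { name: 'user', params: { id: '42' } }
```

The real routers then hook this up to `history.pushState` and the `popstate` event, which is exactly why bookmarks and back/forward work out of the box.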

That aside,

SPAs are great for a number of reasons:

1. You aren't mixing state across server and client. Single Source of Truth is a thing for a good reason. If you have a stateful backend, and your front end naturally has the state of whatever the user has input, you now have to work to keep those two in sync.

2. You don't need a beefier backend. An SPA backed by REST APIs is super easy to scale. nginx can serve up static resources (the JS bundle of the site) LOLWTF fast, and stateless REST APIs are easy peasy to scale up to whatever load you want. Now you just have to worry about the backend DB, which you have to worry about with non-SPAs anyway.

3. Fewer languages to deal with. If you are making a modern site you likely have JS on the front end, so with an SPA you have JS + HTML. With another backend framework you now have JS + HTML + (Ruby|Python|PHP|C#|...), and that backend code now needs to generate HTML + JS. That is just all-around more work.

I agree some sites shouldn't be SPAs - a site that is mostly text content consumption, please no. A blog doesn't need to be an SPA. Many forums, such as HN, don't need to be SPAs.

But if a site is behaving like an actual application, just delivered through a web browser, then SPAs make a ton of sense.


Your arguments seem to be assuming a particularly bad implementation of a traditional backend.

1. A good server-generated-HTML backend will have no more state than a good server-generated-JSON backend. The client state is all stored in the client either way, whether in JS variables, HTML tags, or the URL.

2. A good server-generated-HTML backend doesn't do significantly more work just because its output is in HTML instead of JSON. A bit of extra text generated isn't going to increase your CPU load in any meaningful way.

3. There are only fewer languages to deal with if you aren't in charge of writing backend code. If you're in charge of the backend code, you still have to pick a backend language for your JSON API.

I think you're assuming that the choice is "SPA" or "messy stateful monstrosity". It's perfectly possible to build a RESTful HTML-based API that is as clean and stateless as any JSON API. PHP's been starting each request with a clean slate for decades.
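To make that concrete, here's a toy sketch (invented names, not any real framework's API) of how the same stateless handler logic can emit HTML or JSON from the same data - the server keeps no state between requests either way:

```javascript
// One stateless render step, two output formats. The only inputs are the
// request's data; nothing persists on the server between requests.
function renderUser(user, format) {
  if (format === 'json') return JSON.stringify(user);
  // Same data rendered as HTML instead of JSON - a bit more text,
  // the same class of CPU work.
  return `<article><h1>${user.name}</h1><p>${user.email}</p></article>`;
}

const user = { name: 'Ada', email: 'ada@example.com' };
renderUser(user, 'json'); // '{"name":"Ada","email":"ada@example.com"}'
renderUser(user, 'html'); // '<article><h1>Ada</h1>...</article>'
```

(A real handler would HTML-escape the values; omitted here for brevity.)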


> Your arguments seem to be assuming a particularly bad implementation of a traditional backend.

Whenever someone says "this particular architecture is bad!" they're talking about a bad implementation of it.

The point is whether or not you're more likely to succeed at making a good app with an SPA or a multi-page site. For pretty much all brochure-style websites and many SaaS webapps you're more likely to achieve success (by every common understanding of success) with a multi-page architecture, because it's usually simpler to implement, it works the way browsers expect things to work, and you don't need to implement some hard things yourself. You can make a brilliant SPA for any purpose, but often people try and fail. Saying "you shouldn't have used an SPA" is shorthand for "you didn't understand or implement the SPA well enough, and now your web thing is failing to serve users as well as it should, and a multi-page architecture would have avoided the problems your website has now."


> Your arguments seem to be assuming a particularly bad implementation of a traditional backend.

These weekly SPA complaint threads always assume a particularly bad implementation of a SPA as well though.


You always have to fight with incompetency in any large codebase.

Incompetence is most concentrated in whatever the first thing coders learn happens to be.

I'm old enough to remember when that was c++, then it was java, php, ruby, jquery, now it's react.

It's always a trade-off. You can build things in the "cheapest" language (whatever the first one currently is) but then you'll inevitably get the cheapest code

That's really what this conversation is about in the long arc of coding

Skills and people are a pyramid. The more competency you demand the harder the people are to find.

We have this tendency to taint the tool by the users.

Incidentally after a language or tool loses "first learned" status it generally slowly regains its prestige.

We don't assume a c++ shop is a bunch of morons any more or that using php means you write nothing but garbage. One day vue/react/whatever will lose its first language status as well and I'll be here reading about something that might not have been invented yet being a trashy bad no good idea

Ultimately the technical merits are mostly cover for a conjecture of economic efficiency. There's a reason why people aren't defending things like applications built with Go/wasm bridges - those people are expensive


The key here is that if we compare equally good and robust implementations, equally capable teams, the same UX, etc., of an SPA and a traditional full-stack MVC application with a modern AJAX tool such as Livewire, Hotwire, etc., the latter takes a fraction of the time and cost to build, and the result is far less complex and easier to maintain.

I've worked in both kinds of environments, and unless you're building an offline first app, dogma, or Google maps...SPAs make absolutely no sense from the engineering point of view.


*figma, not dogma. Autocorrect messed up and just noticed.


Multi-page forms without some front-end work end up in one of several clunky patterns: "rerender previous form pages over and over again, except hidden", or "have some token to track partial form data", or "build up a DB to store a partially complete form".

With some frontend work you can have a multi-page form just work, with the data stored in the client up until final submission, and only sending in partial checks ahead of time. This is qualitatively easier to handle, in my opinion.
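A rough sketch of what I mean, as pure functions (the names are made up; real code would wire these up to inputs and a submit handler):

```javascript
// Multi-step form state held entirely on the client until final submit.
function createWizard(steps) {
  return { step: 0, steps, data: {} };
}
function saveStep(wizard, values) {
  // Merge this step's inputs into the accumulated form data.
  return { ...wizard, data: { ...wizard.data, ...values } };
}
function next(wizard) {
  return { ...wizard, step: Math.min(wizard.step + 1, wizard.steps - 1) };
}
// Only on the last step does anything get sent to the server:
function payload(wizard) {
  return wizard.step === wizard.steps - 1 ? wizard.data : null;
}

let w = createWizard(3);
w = next(saveStep(w, { name: 'Ada' }));       // step 1, nothing sent yet
w = next(saveStep(w, { city: 'Kyoto' }));     // step 2, the final step
payload(w); // { name: 'Ada', city: 'Kyoto' }
```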

It also seems extremely uncontroversial that sending data for a single item is going to require less text generation than sending over that data + the entire page.

These are all gradients, but people make absolute claims that don't hold up in these arguments.


"It also seems extremely uncontroversial that sending data for a single item is going to require less text generation than sending over that data + the entire page."

And yet... so many SPAs feel so much slower than MPAs. They suck down MBs of JavaScript, constantly poll for more JSON and consume crazy amounts of CPU any time they need to update the page.

If you're on an expensive laptop you may not notice, but most of the world sees the internet through a cheap, CPU and bandwidth constrained Android phone. And SPAs on those are just miserable.


I'm on an expensive machine and notice a lot, too!

I also use a lot of "classic" websites where they fall over because of bad server-side state.

An example, a train reservation site, where I choose dates + a destination. The next page, it shows me some results. I decide to change the date. I hit the back button, and it falls over, cuz the state management on the server is messed up.

This happens a lot for me (this is mainly on Japanese websites), and it's extremely frustrating.

I don't like a lot of SPAs, I also don't like a lot of "classic" apps, but I do feel like SPA-y stuff at least demands less of the developers so the failure cases are a bit less frustrating for me. In theory.

And on the connection point, the terrible websites with many megs of JS were likely terrible websites with many megs of HTML and huge uncompressed images before that... I don't want to minimize it (thank god for React, but old Angular bundles were the worst), I just think comparing like-for-like is important.

EDIT: thinking about it more though, it's definitely _easier_ to send giant bundles on certain websites.

Given how many times this discussion happens on HN, I feel like instead of the hypotheticals, people should make a list of actual websites in both domains so that comparisons and proper critiques could be made...


> "I also use a lot of "classic" websites where they fall over because of bad server-side state.

An example, a train reservation site, where I choose dates + a destination. The next page, it shows me some results. I decide to change the date. I hit the back button, and it falls over, cuz the state management on the server is messed up."

Any ideas on how to avoid falling into that pit when making a website/app/whatever?


Even if that is the case, is that your demographic?

Are those the people calling for an uber?

Are those people ordering off of doordash?

Furthermore, code splitting / lazy loading is available on many SPA frameworks so you don't have to download all the Javascript in one go.
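The mechanism behind code splitting is roughly this (a toy sketch with invented names, not any framework's real API; real implementations memoize the promise returned by a dynamic `import()`):

```javascript
// A lazy loader: the "chunk" is only loaded on first use, and the result
// is cached so subsequent uses are free.
function lazy(loader) {
  let loaded = false;
  let cached;
  return () => {
    if (!loaded) {
      cached = loader();
      loaded = true;
    }
    return cached;
  };
}

// Stand-in loader; in a real app this would be () => import('./chart.js').
let loads = 0;
const loadChart = lazy(() => { loads++; return 'chart module'; });

loadChart();
loadChart();
// loads is still 1: the module was only "fetched" once
```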


I see your point, but managing state is not free on the client side either. Frontend frameworks usually come with some built in state management, but once it starts to be more complicated we often need to find a 3rd party library to manage it.

I agree that there are many cases where managing the state in the frontend is the preferred solution. Multi-page forms add complexity for both frontend and backend. Sometimes frontend is less complex, and other times the backend is simpler.

I'll add some comments on your statements regarding backend. I'm not saying it is a better solution than managing it on the frontend for all cases. My point is that although it adds complexity on the backend, it does not necessarily mean that managing state on the frontend is simpler. That depends on the use case, but I think a lot of developers "default" to handling state on the frontend that adds much more complexity than a simple backend solution.

> "rerender previous form pages over and over again, except hidden"

In that case, you would only render hidden <input> fields with the values, and not the complete form. The code for receiving "step x" of the flow, would simply read the parameters from the request and include the values in the html code.

> "have some token to track partial form data"

Using <input> fields for this is much simpler. The final page would just read all variables as if they were posted from the same form, without needing to generate/parse any tokens.
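Something like this, sketched as a helper (illustrative only; real code must HTML-escape the values):

```javascript
// Re-emit the previous steps' values as hidden inputs, so the final POST
// carries the whole form without any server-side tracking.
function hiddenFields(previousValues) {
  return Object.entries(previousValues)
    .map(([name, value]) => `<input type="hidden" name="${name}" value="${value}">`)
    .join('\n');
}

hiddenFields({ name: 'Ada', email: 'ada@example.com' });
// <input type="hidden" name="name" value="Ada">
// <input type="hidden" name="email" value="ada@example.com">
```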

> "build up a DB to store a partially complete form"

Most frameworks have a built in "session" that abstracts this away. It may be stored in a database, file, memory etc. If you require distributed sessions, the framework often handles this transparently by just configuring the session manager to use something like Redis to store the data.
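Under the hood it's roughly this (a toy sketch; real frameworks use crypto-random IDs plus expiry, signing, and pluggable backends like Redis - the names here are made up):

```javascript
// A session store: a cookie-sized ID maps to server-side data, so partial
// form state never has to live in the page at all.
class SessionStore {
  constructor() {
    this.sessions = new Map();
  }
  create() {
    // Real frameworks use a cryptographically random ID here.
    const id = Math.random().toString(36).slice(2);
    this.sessions.set(id, {});
    return id; // this is what goes in the cookie
  }
  get(id) {
    return this.sessions.get(id);
  }
}

const store = new SessionStore();
const sid = store.create();
store.get(sid).formDraft = { destination: 'Kyoto' }; // partial form data
```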


> "rerender previous form pages over and over again, except hidden"

There are modern ways to do this, see Unpoly, HTMX, Livewire, Hotwire, etc. You're comparing against an outdated view of what an MVC application looks like. It's like complaining about SPAs because of Backbone.js.

> "have some token to track partial form data"

This is called "sessions", the token you refer to can be a cookie, which is done by default for free on any MVC framework. Doing this in any other way lead to either losing authentication on page reload or security vulnerabilities (storing a token in localstorage, etc).

> "build up a DB to store a partially complete form"

Again, you can store partial data in a session, for free. As it comes by default with any MVC framework.

One of the key points here is that with any of the popular MVC frameworks you don't need to rebuild the wheel and the car from scratch as with SPA frameworks; most of these things come for free, especially anything related to forms. This is something we're not used to having in the SPA world, where everyone has a different way to deal with it.

> Multi-page forms without some front end stuff

Nobody says there shouldn't be any frontend stuff, you still need it of course. If fields are static between steps you can just render every step and toggle between different set of fields using something like Alpine, no need to reload from the server. If fields are dynamic and need some kind of database lookup between steps, Unpoly or livewire/hotwire make this trivial.

Please, let's stop comparing top modern SPAs like Next.js/React to Struts MVC from 20 years ago; it hasn't been like that for many years now.


I've built a multi-page form in an SSR app with a tiny dash of JavaScript. The form's children are divs. The Next button hides the current div and shows the next one. The final Submit button is just a regular submit.

If you get into more than 3 pages, this isn't a great approach for various reasons, but you don't need to reach for a framework the instant you have a multi-page form.


The alternative to SPAs is not 90s pages reloads.

Nowadays you have livewire, hotwire, unpoly, htmx and several other modern solutions.




Most SaaS developers aren't serving APIs to their clients, they are serving websites and web applications.

Applications have state. In order to render and deliver the application, the server needs to have some concept of what that state is.


> a bit of extra text

Doesn't really sound like an application, but a website.

In a web app it can happen that you use it for an hour without the backend doing a single thing.

> There are only fewer languages to deal with if you aren't in charge of writing backend code.

You don't have to deal with them at the same time


> Doesn't really sound like an application, but a website.

"text" here is as in "text/html", not as in "English-language copy".

> In a web app it can happen that you use it for an hour without the backend doing a single thing.

I would submit that this is an extremely rare case. The most involved web apps I interact with (say, Figma) are constantly syncing their state with the server. The simplest (say, TurboTax) save state as I move on to the next screen.

If you do have a case where you can pull that off, then by all means use an SPA. But it's weird to say that something isn't a web app unless it can go long periods of time without server interaction.


I haven't said that something isn't an app unless it can go long periods of time without server interactions.

Generally, server interactions are arbitrary with SPAs, while SSR will happen all the time.

Going only a few minutes without server interactions in an interactive app is a big leap compared to SSR.

PS: I not only can pull that off, but I did. It's not too hard to think of apps like that. Consider e.g. Vscode


If the app is VSCode or anything like it, that's a great candidate for an SPA and I agree wholeheartedly that you made the right choice.


> Doesn't really sound like an application, but a website.

That's probably the key thing - most companies building SPAs don't really need an application but a website. There are many interesting products that need to be applications because of the functionality they require, but for every one such product there are at least a dozen that do not.


> Doesn't really sound like an application, but a website.

Yeah, well, from the name of it, doesn't that sound like what the Web is for?

I mean, we already have systems to run applications on; they're called operating systems.

Not only SPAs but "web applications" as a whole are a mistake IMO.


There is no black or white here. 98% of sites are somewhere in between and you could consider them one way or the other. Is Reddit a website or an application? Is a backoffice dashboard a website or an application?

The problem with SPAs is not the technology or the architecture itself. The problem is everyone thinks, by your own definitions, they are building an "app" by default.

I've already worked on several teams which struggle to get almost anything done, where everything takes ages to ship because of the fanaticism of using React for everything. God, some didn't even know you could submit a form without building a JSON API endpoint.


> isn't going to increase your CPU load in any meaningful way.

yes but it would be soul crushing for a developer


Why?


I’m not experienced with best in class tooling (probably with worst) but seems very quick, easy and native to render html using js.


> seems very quick, easy and native to render html using js.

Seems even quicker, easier and "more native" to render HTML as, you know... HTML.


> 1. You aren't mixing state across server and client. Single Source of Truth is a thing for a good reason. If you have a stateful backend, and your front end naturally has the state of whatever the user has input, you now have to work to keep those two in sync.

If this were true, you wouldn't need a REST API. I don't understand what you're trying to say here. When you make a REST call to get data, you instantly have two different sets of state: the client and the server. It's no different from SSR, it's just transmitted in a different data format (json vs html).

> 2. You don't need a beefier backend. An SPA backed by REST APIs is super easy to scale. nginx can serve up static resources (the JS bundle of the site) LOLWTF fast, and stateless REST APIs are easy peasy to scale up to whatever load you want. Now you just have to worry about the backend DB, which you have to worry about with non-SPAs anyway.

You do the exact same thing with SSR. Stateless shared nothing app tier instances. Been doing it for 15 years now.

> 3. Fewer languages to deal with. If you are making a modern site you likely have JS on the front end, so with an SPA you have JS + HTML. With another backend framework you now have JS + HTML + (Ruby|Python|PHP|C#|...), and that backend code now needs to generate HTML + JS. That is just all-around more work.

You can use JS on both the frontend and backend. Or ClojureScript. Or TypeScript. I'm sure there's others. But yes, for many languages this is a potential negative of SSR.


> If this were true, you wouldn't need a REST API. I don't understand what you're trying to say here. When you make a REST call to get data, you instantly have two different sets of state: the client and the server. It's no different from SSR, it's just transmitted in a different data format (json vs html).

SSR means you don't have a clear representation of the client-side state (as distinct from the presentation) - by definition you render on the server and only serve the view layer to the client, whereas your data model only lives on the server. There will naturally be state in the client (e.g. form inputs), but you don't have a good representation of that in your model.

> You do the exact same thing with SSR. Stateless shared nothing app tier instances. Been doing it for 15 years now.

OK so where does the UI state live - not the long-term persistent entities, but things like unvalidated form input, which tab is enabled, which step of an in-progress wizard the user is on? Either you manage that on the client (at which point you're halfway to an SPA, and getting the worst of both worlds), or you manage it in the application layer on the server (in which case you have all the scaling issues), or you make every UI change go all the way into the data layer which has even bigger performance issues.


That state lives in the context of the page. That's the point of having a URL/page lifecycle that reloads the context.

If you need to persist past a reload then a few lines can save to localstorage. Anything more requires server-side calls anyway.

This magical state that can only be managed on the client-side with a heavy SPA is a myth for 99.9% of sites.
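Roughly those few lines, sketched here against any storage with a `getItem`/`setItem` interface (in a browser that would be `localStorage`; the in-memory object below is a stand-in so the sketch runs anywhere):

```javascript
// Persist and restore form fields across reloads.
function saveForm(storage, key, fields) {
  storage.setItem(key, JSON.stringify(fields));
}
function restoreForm(storage, key) {
  const raw = storage.getItem(key);
  return raw ? JSON.parse(raw) : {};
}

// In-memory stand-in for localStorage, same getItem/setItem shape:
const mem = new Map();
const storage = {
  getItem: (k) => (mem.has(k) ? mem.get(k) : null),
  setItem: (k, v) => { mem.set(k, v); },
};

saveForm(storage, 'draft', { comment: 'SPAs were fine actually' });
restoreForm(storage, 'draft'); // { comment: 'SPAs were fine actually' }
```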


> If you need to persist past a reload then a few lines can save to localstorage.

Sure, and pretty soon you've got a dozen random little copies of bits and pieces of your state, all out of sync with each other.

> Anything more requires server-side calls anyway.

The issue isn't whether you need server-side calls (ultimately every webapp needs server-side calls, otherwise why would it be a webapp at all?), the issue is whether your framework can manage client-side state between those server-side calls. In theory you could create a server-side-rendering framework that was good at this. In practice, none of the big names has succeeded, and certainly not without significant costs. (I'd argue that Wicket does this well to a certain extent, but it comes at the cost of both relatively heavy server-side session state and significantly more network roundtrips than SPA style).

> This magical state that can only be managed on the client-side with a heavy SPA is a myth for 99.9% of sites.

On the contrary, 99.9% of sites have or could benefit from having some amount of client-side state. Any time you have a stateful UI, there's a usability benefit from persisting that. Any time you have so much as a form field - like the text box I'm typing in right now - there's a usability benefit from having that as managed state (I've lost comments because I closed the wrong tab or accidentally pasted over with something else), and in cases like this there would actually be a privacy concern with doing that on the server side.

In theory you don't need an SPA framework to do that. But in practice SPA frameworks are the only ones that do it well.


There's a usability benefit in reloading to reset state, and it's the common expectation when browsing. Regardless, if you do decide to add it, then it's a few lines of code to persist all forms on the page.

SPAs don't automatically provide any state management, and often the complexity requires even more work to manage forms. This is the complaint here, taking a simple requirement and forcing a webapp into it. It's completely unnecessary.


Many SPA frameworks do provide state management. If you start from the idea that you want structured client-side state management for your webpages, you'll probably land on an "SPA framework". And if you're using such a framework, while I'll always advocate things like proper URLs and history (which the framework should handle for you), forcing a page reload when it's not needed seems pretty wasteful.


Regarding a REST call (#1) being out of sync: usually the "state" is in the database. If you're using SSR, it's still a separate context of state from what may be in the database a fraction of a second later, and if you wish to keep that in sync, you're still going to need JS or some other goofy hacks.


> You aren't mixing state across server and client. Single Source of Truth is a thing for a good reason. If you have a stateful backend, and your front end naturally has the state of whatever the user has input, you now have to work to keep those two in sync.

This is a bit weird to me, in that I'd say that cuts in the opposite direction. State can exist in at least three locations for most applications: db, app server, and client. Keeping state consistent across all of them can be difficult in the best of times, but thin clients by their very nature carry less state, lessening the burden. Sometimes client state is necessary, for richer user interactions, but for all but the most cosmetic of purposes you're going to have to replicate that state on the backend anyway, to enforce business and security requirements.


> but for all but the most cosmetic of purposes you're going to have to replicate that state on the backend anyway, to enforce business and security requirements.

This really just comes down to what you're writing, how app-like your web app is. It's too easy to have one's own experience focused in a certain area and estimate the remaining majority as relatively similar. (For most of what I personally work on, the DB portion is mostly a simple straightforward serialization of what the user has built through the application; whereas client-side state has so many aspects to it I couldn't give a brief characterization—the whole app is basically client-side.)

From what I can tell most of the disagreement about SPAs results from devs who are building things that aren't app-like railing against their futility vs devs who are, who become perplexed by the vitriol when they have immediate experience with their architectural benefits.


> From what I can tell most of the disagreement about SPAs results from devs who are building things that aren't app-like railing against their futility vs devs who are, who become perplexed by the vitriol when they have immediate experience with their architectural benefits.

The SPA critics from the article and this thread have repeatedly said that their issue is not with building things that need the benefits an SPA architecture brings. The criticism is that the majority of SPAs are harmed by that architecture, because it is the industry default and gets used when it isn't appropriate.


I get that that's the biggest problem, but there are plenty of people in the thread talking about how they're a bad idea in general (including the comment I was replying to)—which incidentally lines up with the (apparently) clickbait title "SPAs were a mistake".


For the record, I was making a general point about one of the tradeoffs with an SPA vs MPA, not making the claim that MPAs are universally better than SPAs for all use cases. I think most reasonable people can agree that there are places where SPAs are called for and places where they're not. It's the ambiguous cases that draw the conflict, and psychologically the anti-SPA people focus on the really shitty ones and the pro-SPA people focus on the use cases that would be impossible in an MPA.

Also, for what it's worth, I've worked full-time on one of the largest SPA projects in the world (~500 engineers contributing frontend code to it on an average week), so this is not coming from a place of total ignorance.


I excerpted and replied to something specific from your comment which read to me as essentially "even the cases that seem to need it probably don't" which matches both the tenor of (many of) the replies and the title of the article. But if I misread you my apologies.


That’s the thing: they are a bad idea in general because, in general, people ARE building things that don’t benefit from an SPA architecture. You’re the one extrapolating that they are saying ALL SPAs are bad.


> You’re the one extrapolating that they are saying ALL SPAs are bad

This was not my extrapolation. But if you skim over a comment you're likely to perceive it as falling into one broad camp or another whether that's the case or not.


This is missing the real reason that people write SPAs, which is that React solved web components, which are hugely beneficial for almost 100% of web sites, and thus became the standard for building web sites, and with React it's easier to make an "SPA" than to make a "traditional" site and users don't know or care either way.


Users care; they just don't know why many modern websites are bad websites. Every website is now an app, whether that actually makes for better UX or not.


Web components actually aren't that good. Most seasoned and experienced developers I know hugely prefer either ASP.NET webforms or one of the many Java MVC implementations. We all know and use React daily, but I've literally seen the same application built faster with better maintainability and scalability once it was moved away from a SPA.


Also the React programming model is brilliant.

Anyway the point is that people write SPA's because of React, not the other way around.


Wicket has offered a beautiful component approach for over a decade now. Having seen it I can't stand page-oriented MVC frameworks (indeed it's good enough that it convinced me that OO actually has some merit in some cases).


I used Wicket quite extensively about 10 years ago, so my comments may not be true anymore. I began using Wicket as it was so much better than Struts and JSF. However, the development of new custom widgets in Wicket was so much more convoluted than implementing the same widget in Backbone.js. And it was hard to inject new functionality into an existing page. I eventually refactored all my UI code into jQuery + Backbone.js (this was before React) and that code is in production and still working. And the new developers maintaining it don't see any reason to refactor it into React or Vue.


> However, the development of new custom widgets in Wicket was so much more convoluted than implementing the same widget in Backbone.js.

Hmm, I found developing custom widgets was a joy, though the key was to keep them very small and compositional - e.g. if you want a user details widget with an address entry, it's probably best to make that address widget its own smaller widget that the user details widget just uses - and aligned with your model hierarchy. Often you end up with a parallel hierarchy where e.g. you have a user model that contains an address and phone number, so you've got a user display widget whose model is that user object and in that there's an address display widget whose model is the address field of that user object (and similarly for your edit widgets).

I tried to look up examples of what doing the same thing in Backbone.js looks like, but the search results don't seem to be about making custom widgets as I understand it. To be fair I'm struggling to find examples in Wicket as well. But I'd be interested to hear what it does better.

> And it was hard to inject new functionality into and existing page.

Hmm, what kind of functionality do you mean? I will say that again making pages very compositional was key - my team settled on a pattern where most of our pages were just a single top-level panel (so you could always reuse or embed a whole page if you wanted to) and then most panels were made of a handful of smaller panels, similar to the clean code style where you try to make each function only call three or four other functions. And then it was easy to change whatever we needed because the code structure corresponded directly to the logical business/model structure and the inheritance structure corresponded to the visual structure (e.g. we had an abstract class for what an "editing panel" looks like and all our editing panels inherited from that. So if you want to add a new field to the user model, you add it in the user model and the user display panel and user edit panel are right next to that. And if you want to change the visual design of all our editing panels, you change that in the parent component and it will apply to all of them).


Is this the Wicket you're referring to? https://wicket.apache.org/

What's the best intro you know to how its components work, and the benefits and tradeoffs over other approaches?


That's the one. I learnt it by pair-programming (which is great but not necessarily replicable). From a quick search, their userguide has a fairly light introduction to why: https://nightlies.apache.org/wicket/guide/9.x/single.html#_w... . And the book https://livebook.manning.com/book/wicket-in-action/chapter-2... has a more detailed explanation of the what.


I agree with all your points, but I think it's worth pointing out that those benefits you mentioned are largely for the developers. As a consumer I love a well-written SPA when the problem set calls for it, but most of the SPAs I have to use are garbage. I don't fault the tech for that, although I suspect that a lot of those SPAs were created by "me too" people who just wanted to build an SPA. Back in React's pre-1.0 days, I did that, and so did several people on my team (so I'm not casting any stones here, just trying to state facts).

Last time I bootstrapped a React SPA, I don't think CRA included a router out of the box.


> but most of the SPAs I have to use are garbage

As an example, look at reddit. I'm still using old.reddit.com because I can't stand their fancy SPA UI. It is so bad that, as a user, I enjoy HN's interface a lot more than reddit's.


2. Hard to believe that is true in the general case.

The typical scenario for an SPA is to use some sort of REST API. These APIs are usually designed for general usage, not specific usage, i.e. designed to be reused between components and views, so they basically return everything on a given model regardless of whether the data is needed or not.

Therefore the controller queries the database with the equivalent of SELECT * on a table (or perhaps multiple tables with joins) and then exposes every field.

And in many cases one request is not enough because of the generic design of REST APIs, so a few more requests are fired, resulting in multiple SELECT *s against the database, and eventually the equivalent of a SQL JOIN is performed in JavaScript.

So already the SPA solution has an increased cost from asking for data that isn't currently needed - not only in the traffic between the database and the backend, but also in the traffic between the backend and the frontend.

And because we want to be good REST citizens, we sprinkle the JSON payload with timestamps, resource URLs, pagination information and what not, which in the majority of cases is never used.

Compare that to SSR, where you can fetch exactly what you need from the database with a custom SQL query (I hope you do, otherwise the SQL leprechaun will pay you a visit).

Just imagine how much data there is on the web that is requested and then just discarded, never even looked at.

It is possible to design custom REST endpoints for each component, but then what is the point of an SPA? If you are already writing a custom endpoint, just return HTML instead of JSON and swap out the old component for the new one (a one-liner) - the end result is the same.

SQL -> (Array of) object(s) -> JSON -> Javascript (Array of) object(s) -> HTML

can therefore be shortened to

SQL -> (Array of) object(s) -> HTML
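As a sketch of that shortened pipeline (all names here are hypothetical), the server can render query rows straight to an HTML fragment, and the client-side swap of the component really is a one-liner:

```javascript
// Hypothetical server-side helper (e.g. called from an Express handler):
// render query rows straight to an HTML fragment, skipping the JSON step.
function escapeHtml(s) {
  const map = { "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;" };
  return String(s).replace(/[&<>"']/g, (c) => map[c]);
}

function renderUserRows(rows) {
  // rows would come from a custom SQL query selecting only what's needed,
  // e.g. SELECT id, name FROM users WHERE ...
  const items = rows
    .map((r) => `<li data-id="${r.id}">${escapeHtml(r.name)}</li>`)
    .join("");
  return `<ul class="users">${items}</ul>`;
}

// Client side, the "swap in the new, swap out the old" one-liner:
// document.querySelector("ul.users").outerHTML =
//   await (await fetch("/users/fragment")).text();
```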

3. That doesn't make any sense; the number of languages is the same regardless.


GraphQL is such a quality of life upgrade coming from this environment, especially at the scale where your frontend teams are potentially larger and shipping more than the teams closer to the SQL can provide.


GraphQL is a consequence of the SPA design, a bad design leads to a worse fix.

The drawback is that the frontend now has its own schema, often it starts as a naive direct mapping of the real schema.

Thus any changes in the real schema also need to change the frontend schema and every use of it, or the mapping to the frontend schema needs to change.

Eventually these two schemas will diverge because it is not feasible that every schema change results in frontend change. Especially if the idea is to have two different teams working from either side, then the backend team can’t wait for the frontend team therefore the schema mapping will change.

And the thing is that the frontend shouldn't be aware of how the backend schema is constructed: if the User model is split across three different tables for some technical reason, that should not change how the frontend operates. The frontend's understanding of what a User is shouldn't be the same as the backend's.

Therefore ideally frontend schema and the backend schema will always differ. They don’t view the world the same way.

However what you now have is a slow mapper between the frontend schema and backend schema.

The point of relational databases is that you can view your data from different perspectives by writing different SQL queries. That is already built in. But now we have invented yet another layer on top of SQL, usually in combination with that existing monstrosity, the ORM.
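A hedged sketch of what that mapping layer tends to look like (all names here are invented): the backend stores a user across several tables, the frontend schema wants one flat User, and every backend schema change funnels through code like this:

```javascript
// Hypothetical frontend-schema mapper: the backend splits "user" across
// users and addresses tables; the frontend type is flat and shaped differently.
function toFrontendUser(userRow, addressRow) {
  return {
    id: userRow.id,
    displayName: `${userRow.first_name} ${userRow.last_name}`,
    city: addressRow ? addressRow.city : null,
  };
}
```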


What a tortured usage of GraphQL. Schema files are automatically generated by the backend, and components pull the data they need, and no more. If you find yourself changing schemas constantly, then you're not defining them in a scalable manner. You've basically misused the tech, and blamed the tech instead of your misuse.


That is even more horrible than I thought. Automatically generate schemas 1:1 and then expose them. Let me guess: tons of information leakage, DDoS attacks and queries not hitting indexes. This is absolutely the worst idea I have come across in web development. Horror.


GraphQL is just another artificial solution to a problem created by SPAs themselves. Same as SSR, hydration, server components, client-side routers, dynamic bundle loading, dynamic translation loading, etc, etc, etc. A whole industry of workarounds for a broken idea. Now, 10 years after SPAs became popular, we are starting to approach the point where we almost have what we already had.


What I really don't get is why we don't just expose SQL directly at this point. Is it just security? Database servers have fairly extensive authentication and authorization models.


Authorization & access restrictions. Yes, you can go quite far with table/row/column permissions, but a lot of business logic cannot be modeled using just those (e.g. "user cannot place orders if total outstanding invoice payments surpass value $X").


The combination of DB permissions, DB constraints, and simple (SQL, not procedural language) triggers gets you a lot, including the ability to enforce rules like the one you mention.
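For the rule in the parent comment, a PostgreSQL-flavoured sketch of what such enforcement can look like inside the database itself - table names, columns, and the threshold are all invented, and note that the trigger function here does use PL/pgSQL rather than plain SQL:

```sql
-- Sketch only: reject a new order when the customer's unpaid
-- invoices exceed a limit, enforced at the database layer.
CREATE FUNCTION check_outstanding() RETURNS trigger AS $$
BEGIN
  IF (SELECT COALESCE(SUM(amount_due), 0)
        FROM invoices
       WHERE customer_id = NEW.customer_id
         AND paid = false) > 1000 THEN
    RAISE EXCEPTION 'outstanding invoices exceed limit';
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_check_outstanding
  BEFORE INSERT ON orders
  FOR EACH ROW EXECUTE FUNCTION check_outstanding();
```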


Yes, you can enforce a lot through SQL triggers/stored procedures etc. But you often end up abusing your DB/SQL as a business logic layer, where your business logic is encoded in a huge set of row/column permissions and custom SQL triggers. This tightly couples your database into your whole business application stack.

Especially in Oracle PL/SQL, I've seen this abused to an extent where no one understood the whole business logic anymore (as logic was spread across the frontend, middle-layer services, and DB mumbo-jumbo), and the database became a fragile core piece (with significant vendor lock-in) that hindered all sorts of future development.

Seriously, your business logic should be modelled in code, ideally in some sort of service layer (which does not necessarily mean microservices!).


> But you often end up abusing your DB/SQL as a business logic layer, where your business logic is encoded in a huge set of row/column permission and custom SQL triggers.

That's not "abuse". Admittedly, it's no longer viewed as an essential best practice for most systems the way it used to be, because it's more common now to have a single application that fully owns the database and to disallow access by other means (at least in idealized theory - some ops staff still end up with direct access to the prod DB). So in theory it's no longer necessary to guard against either circumvention of the rules or duplication of logic (which is inevitably inconsistent, as well as expensive to maintain).


Then why aren't we working on improving the databases to allow for such complex rules, and instead wrap it in another layer (often multiple) to do all this stuff there?


because a database should not hold your business logic. It should hold your data, and that it can do well. See also my other post on parent for more reasoning.


I beg to differ: in many interesting cases we put at least some parts of the business logic into the database. The table design is a direct consequence of the business logic, and the same is true for constraints and triggers.


Why shouldn't it? As noted above, database servers already have most of this (security) logic in them - likely tested much better than whatever you can write on top of the database yourself. And given how many apps are basically just CRUD, why reinvent the wheel every time?


Mostly because the tooling is so bad. Do you have unit tests, lint, and easy version control for your stored procedures? Are they written in a way that matches the rest of your programming at all? Can you import a randomly picked utility library?


The next obvious question is, why is tooling so bad then? And would it have been so bad if we invested as much into RDBMS as we did into Node.js web-frameworks-of-the-day.


Writing the next modern elegant artisanal javascript webshit is a lot easier than making a sandboxed programming environment that integrates tightly with a production quality database engine and has a good story for testing, deployment, debugging, etc.


Even if you solve the security issue, a query with complex joins can easily bring down the server.

This could be solved by only exposing stored procedures, but that just moves the code to the database server instead of the REST service with the same problems as before.


You can also use a VIEW.

How does GraphQL make sure to respect table indexes? If it doesn't, you get super slow queries.


You can still get performance issues with a view if you "select *" on a large amount of data, or join with other views. By exposing SQL to a web page, you also open yourself up to DDoS attacks, since anyone can write arbitrarily complex SQL queries.

You can get the same problems with GraphQL or stored procedures too of course, if the queries are not optimized correctly


So what's the solution to this?


Performance quotas.


Because a human is writing the resolvers pulling the data from the database. Set whatever index you want to use.


Just give the end user an account to phpMyAdmin. Done. No complex frontend framework required.


You wouldn't believe how often companies use Excel with external data sources like this. Excel is basically the common UI for a lot of people.

And most of the projects I worked on in my professional career as a webdev were replacing such a workflow with a proper web application, because Excel does not scale and eventually people fuck up their data.


> The majority of routers for React, and other SPA frameworks, do this out of the box. This has been a solved problem for half a decade at least. A website has to go out of its way to mess this up.

They might not mess up history when using a standard routing library, but I've seen plenty of devs forget to add unique titles to different pages which is frustrating for a user with multiple tabs going.

On SO the accepted answer for react-router looks like "create a custom Page component with title as a prop"[0]. At work I just ask folks to use react-helmet.

[0] https://stackoverflow.com/questions/52447828/is-there-a-way-...
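That accepted-answer pattern mostly boils down to a route-to-title mapping. A minimal vanilla sketch (route patterns and names are hypothetical) that a router effect would apply after each navigation - react-helmet achieves the same thing declaratively:

```javascript
// Hypothetical route-to-title table; a router effect would run
// document.title = titleFor(location.pathname) after each navigation.
const TITLES = [
  [/^\/$/, "Home"],
  [/^\/users\/\d+$/, "User profile"],
  [/^\/settings/, "Settings"],
];

function titleFor(pathname) {
  const hit = TITLES.find(([pattern]) => pattern.test(pathname));
  return hit ? `${hit[1]} - MyApp` : "MyApp";
}
```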


> . You aren't mixing state across server and client. Single Source of Truth is a thing for a good reason. If you have a stateful backend, and your front end naturally has the state of whatever the user has input, you now have to work to keep those two in sync.

Why would I want to keep any state on the client? What in the history of the web (the whole idea being that it's someone else's computer) would make it a good idea to take away the one major selling point of the web? That no matter what, someone else has the state I need, and I never have to worry about losing it if something happens to my connection. It either went through or it didn't.


> The majority of routers for React, and other SPA frameworks, do this out of the box. This has been a solved problem for half a decade at least. A website has to go out of its way to mess this up.

99% of SPAs break history related features in some way.


97% of statistics cited on HN are made up on the spot.


99% of people don't FRICKING care.

Most of the users I have to deal with still have problems with double clicking.

They don't understand basic browser features.

They don't understand that they can set preferred language in their browser.

History related features are not an argument in any way.


They do care. Breaking these features also ensures end users are endlessly frustrated by inconsistent UI/UX handling.


They do care if the experience doesn't work right, even if they don't understand why.


Why would people not care that browsing URLs work well?


Because they don't care about computer stuff. They just have to do computer stuff because they are forced to.


Shit you gotta do not because you want to but because you have to, you care even more that it works without hassle.

This is so obvious that not seeing it is weird, and claiming not to see it is weirdly defensive.


They do care that it works. That's the point being discussed.


"A website has to go out of its way to mess this up."

I know it's supposed to be easier, but I keep seeing teams mess this up.


I can say I've seen a lot of state management issues with SPAs... more with Angular than React, and almost none when using React+Redux well.

I think a part of this is that a lot of developers simply don't desire, want to, get to or otherwise take the time to understand the framework they are using... It has been true forever... I can't tell you how many times I've seen stuff copy/pasted from StackOverflow, by devs that don't understand what they're doing, or they add jQuery to a React application, and have goofy interactions.

The lack of understanding will always be a thing, you have to learn, most learn by doing, and when starting out, you don't know that what you are doing isn't good, but it kind-of works.


Using React+Redux well is a huge if.

Last time I used redux, about 4 years ago, every tutorial on it demonstrated a completely different way of using it.

I spent a week piping a couple dozen form inputs through redux.

Throw typescript in there and life got more complex.

Maybe it sucks less now. But I've seen plenty of websites where every key press causes crap tons of state to get copied around because "lol const only". I've seen sites where typing takes a second per character due to misuse of Redux - and the problem with Redux is that it is easier to misuse than to use properly.


The biggest issue I've seen with things like that is that form validation can have unexpected surprises on keypress... so depending on how you're doing validation, that is usually what throws off the timing and makes things drop to a crawl.

Often, if you have a form action button separate from your validation, it's best to keep form state in on-change handlers (or otherwise isolated) and only push it to the Redux state when the action button itself is pressed.

But I do understand the sentiment... I've run apps, and even forms with some relatively complex and large state via redux without much issue. The biggest hurdle is often getting everyone working on something to understand how redux works, and how the difference comparison works for state changes. Also, dealing with when/where an action should be created/dispatched, how to use the thunks for async handlers, etc.
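One way to express that "isolate form state until submit" idea (action and field names are hypothetical): keystrokes stay in local component state, and the store sees a single action per submit rather than one per keypress:

```javascript
// Hypothetical Redux-style reducer: one state copy per submit,
// not one per keypress.
const initialState = { savedForms: [] };

function formsReducer(state = initialState, action) {
  switch (action.type) {
    case "form/submitted":
      // The whole form arrives as one payload, dispatched on submit.
      return { ...state, savedForms: [...state.savedForms, action.payload] };
    default:
      return state;
  }
}
```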


Tried ngrx?


Yeah... it's decent, but there are many, many other reasons why I'll never touch an Angular app again.


I just don't know how it could be made any easier to not mess up. You really truly do need to go out of your way to mess it up, or be entirely unfamiliar with the JavaScript routing framework or library you're using.


They do, but it's usually the anti-SPA people who mess it up, by refusing to work with the grain of their tools. It's not quite the same thing as "strategic incompetence" but it feels related.


Client-side routing for page-oriented stuff is certainly not a solved problem: the basics, sure, but not actually doing it properly. There are some parts of the experience that it’s not possible to do perfectly because the web doesn’t expose the necessary primitives, and exceptionally few things go beyond the basics of just clobbering and resetting scroll position on back/forward. To do it properly, you need to restore all transient UI state (form field contents/state, scroll positions, focus, selection, media playback position; zoom level, probably not implementable; and there may be more, though I don’t include things like <details open> as transient state since that’s put into the DOM) on back/forwards, and I don’t know if I’ve seen anything actually do that. Then there’s the matter of helping accessibility tech to realise a page change has occurred, and I’m not sure of the state of the art on that, but last time I looked (some years ago) I think it was bogged down in unreliable heuristic land rather than actually being solved.


1) JS history handling is fragile. A single error can break navigation completely. There's no built-in loading indicator so sites are left with no feedback or have bloated progress bars. And nothing automatically solves for deep links if the app doesn't use routes for different views or relies on other events instead of hyperlinks.

2) Servers are very fast and assembling HTML is trivial. Browsers are optimized for downloading, parsing and rendering HTML as it streams in. Using JS to write HTML after making multiple network calls is objectively slower than a single network request that assembles everything on the server close to the datastore with minimal latency.

3) Every other language is faster and more capable on the server than JS, and all major web frameworks have modern component-based UI templating. Interactions with roundtrips are just fine, and some light JS can handle most other scenarios.

> "an actual application"

That's the only reason to use a SPA, not what you mentioned.


All of your benefits seem to come from using REST APIs alone to drive the site. That can be done with any site, but SPA usually implies more.


> SPAs are great for a number of reasons:

> [...]

> 2. You need a beefier backend.

I don’t work on front-end and am trying to learn from this thread, so I may misunderstand, but that doesn’t look like an advantage to me. Doesn’t “beefier backend” imply “higher costs”?


Hi Simon, why'd you pull me in ;)

We recently went through a rewrite of our frontend for https://www.crunchybridge.com from an SPA to a more "basic" request/response app and couldn't be happier. Previously it was an SPA with React, and we rebuilt it from scratch as a request/response app using Node. In places we still leverage React components for reusable frontend bits, but no more SPA and state management.

As you've mentioned in some of your other threads on this, the state management and sync between the API team and the front end team just caused velocity to slow down. It took longer to ship even the most basic things. While we want a clean and polished experience, the SPA approach didn't really accomplish any of that for us.

The rewrite took under 8 weeks for an app that had been built up over a couple of years, and we quickly recouped all that time in our newfound velocity.


What server-side framework did you use for the request/response rewrite?


It was all Node on the request/response side, as a basic Express app.


My knowledge of the frontend is limited. May I ask which Node framework you used? Is it Next.js? If not, what do you think about using Next.js? I really consider it a better approach and want to use it on new projects.


We actually went with less framework rather than more: a basic Express app. We explored Next and almost went that way.


Totally agree.

We built a product in ~5 months with real-time collaboration, extensive interactivity, Oauth, Stripe and Gmail integrations with a standard Ruby on Rails stack.

It's rock-solid, performant, dead-simple and extremely productive to work with.

Why're we throwing away years of learning to build unstable, complex and inaccessible applications?


>Why're we throwing away years of learning to build unstable, complex and inaccessible applications?

1. Smart people seek out difficult problems.

2. Difficult problems drive the creation of complex, niche tools, that bear cultural associations with the smart people who made and use them.

3. People who want to be smart seek out complex, niche tools.


I think you're somewhat right, but it's not the whole story. There's another pipeline that goes something like:

1. Technical software problems are more fun than difficult product problems

2. Programmers would rather solve fun problems

3. Programmers end up creating technically elaborate machines to solve simple (but annoying) product problems


As an ex-member of a team that used React, Redux, TypeScript, observables, epics, thunks, custom home-grown validation libraries, websockets and Elixir, deployed as two different microservices, to build a... signup wizard... I can confirm this.

I proposed to build it in Rails (which we already had, but was the "old monolith we're migrating away from") and I almost get crucified.


That's a story I'm familiar with, but I am not actually aware of any (major, commonly used) tools that were created out of boredom. I only see instances of people using existing tools when they are not necessary out of boredom.


I think it might be simpler?

The more complex products are the only ones that typically have any documentation or up to date learning resources.

You want to learn how to build a thing and this is the only thing that really exists, is up to date, and works.

It may not be the right tool, but for someone new it's impossible to tell what the right tool is, and people online are stereotypically obtuse about anything tool-related.


lmao yeah pretty much. I'm moving from a low code shop to Node/Vue because I can't keep people. They all want to pad their resume, so I'm going to build at 2x the cost just so I can keep the projects going.


The level of Node/Vue isn't padding your resume in this industry, it's having a resume. Padding your resume would be... I don't know, Elm?


Smart and YOUNG, seek and try to conquer complexity, pushing open The Gate of Truth. Only to be completely consumed and burned by it.

There is an abundance of smart people. Wisdom has been, and possibly always will be, in short supply.


> Smart and YOUNG

I'm 21 :-)


> > Smart and YOUNG

> I'm 21 :-)

No problem, you'll be young in a few years.


I think this is a midwit[0] take.

Low: simple good

Mid: optimized good

High: simple good

0: https://knowyourmeme.com/memes/iq-bell-curve-midwit


Same experience here, in our case with Laravel. The project started as a Next.js SPA and after we needed to add authentication, translations and background jobs things became so crazy and so "custom" that we ditched it and in almost 2 weeks had everything built in a much more robust way with Laravel and Livewire + Alpine.


Because the majority of developers will gravitate towards tools that will give them the best employment opportunity, not necessarily the best tools for the job.

TL;DR Resume Driven Development


What approach did you take with real-time collaboration and interactivity? Is that part still rendered client-side?


One could argue Rails is just doing a decent job of hiding a monstrous amount of unnecessary complexity from you for basic CRUD stuff. It's good at this… until it isn't. The whole ORM abstraction (not just in Rails) is questionable.

The way most of us would handle authorization in something like rails is a leaky abstraction, especially when we’re usually backing onto postgresql which has very mature roles and permissions.


I always thought of the benefits of SPAs more as a separation-of-concerns thing. You can pretty effectively build a functional front-end web application against a mocked set of back-end REST APIs while another team builds out the back-end. There are absolutely tradeoffs, and being a good software engineer is about understanding where and when those tradeoffs apply.


That's definitely true at the organizational level, and it's an argument with some merits.

In practice though, I've seen this backfire. You end up with the frontend team blocked because the API they need isn't available yet, and then the backend team gets blocked because they shipped the API but they can't use it to deliver value because the frontend team don't have the capacity to build the interface for it!

My preference is to work on mixed-skill teams that can ship a feature independently of any other team. I really like the way Basecamp describe this in their handbook: https://github.com/basecamp/handbook/blob/master/how-we-work... - "In self-sufficient, independent teams".


that sounds like a mismatch between the architecture and how work is getting planned no? if the backend is in the critical path to delivering the user value of a feature then the backend and frontend engineers need to be developing (and testing) the feature together


They ALWAYS need to be building and developing the feature together or this happens. Decent API design without deep understanding of Client implementation or performance needs is nearly impossible.

They generally should all be in the same team, but that often doesn’t scale.

Not having them in the same team pretty much never works well though either.


Also allowing your mobile app to use the same API as the website.


That's not really unique to SPAs, right?

I don't know much about front-end development but I imagine you can create a front-end that is both not an SPA, and not server-rendered.


It's not about being unique, or what you can/can't do. You certainly can mock a front end with an SSR app, but it gets messy when you are building a rich client app and need to start sharing state back and forth.


You can still do that with SSR solution, the mocking just moves one step down, instead of mocking a JSON request you mock a class or an interface.
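As a sketch of that (all names hypothetical): with SSR the page renderer depends on a service object rather than an HTTP endpoint, so the frontend work can proceed against a mock while the real implementation lands later:

```javascript
// The renderer only knows the service's interface, not where data comes from.
function renderGreeting(userService, userId) {
  const user = userService.getUser(userId);
  return `<h1>Hello, ${user.name}</h1>`;
}

// Mock used until the real UserService (backed by the database) exists.
const mockUserService = { getUser: (id) => ({ id, name: "Test User" }) };
```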


Why not eliminate that organizational bottleneck by using a full-stack framework that lets one person do it all? DHH recently described Rails as a one-person framework [1]. I think Phoenix fits that category as well.

[1]: https://world.hey.com/dhh/the-one-person-framework-711e6318


When SPA started picking up steam, I thought it was an amazing development! We had gone from mainframes, to personal computers, and were back to mainframes and using our powerful machines as glorified dumb terminals. This way, we could have UI code running locally, and servers only handling state. Plus, less data to transfer!

Then the frameworks ballooned in size. What previously was seen as wasteful (rendering and sending HTML) started to seem pretty frugal in comparison to the multi megabyte pages. Not to mention that one could always send just page fragments.

Other than specialized apps, I think most single page applications are a mistake. Sure, some may benefit from a nice UI - say, I'm writing a 3D modeler. But most apps there are could just re-render pages. 'Refresh' is not much of a problem in an age where simple REST API calls are returning megabytes of JSON data...


I try to tell myself "don't get caught up in using a fancy frontend framework on this one," as I'm starting a new project, but I keep running into situations where my functionality would just work so much better.

As an example, I was writing a tool the other day to automate some things that have to do with quotes for my 9-to-5. Being able to add inline functionality in Django to select a customer within the quote page, or add / edit a new customer without having to leave that quote felt very 'hackish,' using the same jquery callback method used in Django Admin. My point is, this feels like very basic functionality, but turned into a whole other ordeal using traditional methods.


> Being able to add inline functionality in Django to select a customer within the quote page, or add / edit a new customer without having to leave that quote felt very 'hackish,' using the same jquery callback method used in Django Admin.

Agreed. For form based apps I don't like to fall back to SPAs (bloat, the desire of every dev to reinvent forms in their framework, client and server side validation duplication), and yet working with relational data they are easier.

It's one of those places where a half-way step would be so useful.


That is something I hadn't really taken the time to compartmentalize and articulate, but a js framework that focused on forms only would be wonderful. I'm sure that someone has taken a stab at it. Something like crispy-forms that added the ability to add components for variable data such as inlines...

I'm guessing that Vue.js may be a good drop-in for this, but it has been a while since I have used Vue.


I initially thought part of the appeal was offloading the workload to the front end, where your processing power scales infinitely with each user's device. Maybe the benefit turned out to be negligible, I'm not really sure. Can server costs be reduced by offloading the work to the front end?


They absolutely can, if your workload is ideal for this situation, but unfortunately, the most "expensive" (in terms of time, money, computing power, you name it) part of giving a user information is typically the filtering and collation of that information from a much larger pool of information — almost always a pool of information that is far too big and too private to just send to the client to sort through locally.

Even in the most simple scenarios, you quickly find your limits. If you get data back, but it's paginated (and it almost always has to be, for basic reliability reasons as much as anything else), you can't be guaranteed to have the complete set of data in a given circumstance, so you can't perform operations like filtering, pivoting, or sorting that data locally. You have to ask the server to do this for you and wait for the response, just like we've had to in the past.
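A tiny illustration of that pagination point (data is made up): sorting the page you happen to have is not the same as the server sorting the full set and then paginating.

```javascript
// Sorting only the slice the paginated API returned...
function sortPageLocally(page) {
  return [...page].sort((a, b) => a - b);
}

// ...versus sorting the whole data set server-side, then paginating.
function firstPageSortedOnServer(all, pageSize) {
  return [...all].sort((a, b) => a - b).slice(0, pageSize);
}
```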


Dynamic loading of content is a feature of SPAs, but it's not a defining feature, nor unique. In fact, one defining feature of SPAs is the offline capabilities (service workers, caching, etc.), which sits at a bit of a tangent to database considerations like this.


If you can actually offload substantial CPU cycles to the client, yes, you'll save server costs. But the SPA hype has led to a lot of SPAs that work like this:

> User clicks a tab. A request to server fetches the JSON data for the tab. Client renders it to HTML. User fills in some fields and clicks submit. A request to server sends the JSON form data and gets a JSON response code. Client shows a confirmation screen. ...

In this case, you're not saving much by templating JSON on the server instead of just templating HTML.


This was one of the appeals initially, and would certainly still be true if you were doing something very processor intensive that could securely be done on the client.

Languages/runtimes have gotten faster and more optimized, while hardware has continued to move forward. It's also far easier now to add more backend instances using orchestrators like k8s, so it's less of a big deal to have to add replicas.


> Can server costs be reduced by offloading the work to the front end?

I would say yes. One significant benefit of SPAs is that you can produce fairly complex applications without any server logic, only static hosting. The workload is essentially offloaded to the build process and the front-end. You still need to carefully consider the effect on e.g low powered and js-disabled devices ... but these are straightforward considerations.


>Not only do we not seem to be learning from our mistakes

That is a lot of good faith. What if they were not mistakes, but a deliberate attempt to push JavaScript as the one and only de facto approach to web development, and Resume Driven Development?

I recently asked this [1],

I don't want to name names, but does any tech company actually apologise after their high evangelism to the world and industry and walk back 70% of their decision five years later?

And for some strange reason this mostly happens to Web Development in general.

[1] https://news.ycombinator.com/item?id=30451916


I know people for whom the traditional way of building a web app is completely foreign. I am curious how you would describe the concept and tools to someone who has never encountered them before outside an SPA architecture.


>....taking advantage of the supposed benefits of SPAs: there are no snazzy animations between states....

If that's the main benefit, let's hear it for MPAs. I want a website that's fast, responsive, clicky, sharp and to the point - not some soft-focus pastel cartoon movie. As the author says, that's fine for audio/video sites (and reasonable for other entertainment-focussed sites) - for information sites it just gets in the way (animated elements - especially persistent ones - are a terrible idea when trying to concentrate on textual content).


> not some soft-focus pastel cartoon movie.

For some reason, designers like it whether it makes sense or not, so that's what you get. The current design trend is just a shit show: nuking usability for nearly no benefit, IMO.


It’s all about state management IMO. There are legitimate reasons to keep UI-specific complex temporary state on the client that would be more complex (and slower) if the server needed to hold it. So an SPA, or at least a partial SPA, does make sense in some situations.

But it does tend to become a hammer for every screw over time…


> It's been so frustrating watch this play out over the past decade.

> I keep seeing projects that could have been written as a traditional multi-page application pick an SPA architecture instead, with the result that they take 2-5 times longer to build and produce an end-result that's far slower to load and much more prone to bugs.

It's been frustrating seeing the web platform not play out, seeing so little growing into SPAs, so little maturing.

URL-based routing is heavily under-represented, tacked on only by the one or two blokes who happened to have some memory of web architecture. It clarifies the architecture both internally & externally.

Just as bad a problem: single-page apps being stuck, forever, as single-bundle apps is phenomenally sad. Splitting bundles into chunks as a manual development task is so hard, so bad. The goal of having web-based modules almost made sense, almost happened, but we radically underinvested in transport technology, with cache-digest going undelivered. I continue to think JS modules, with import maps - the key tech to making modules modular - are worth it, and would help make our architecture so much better. There is mild extra time to first load, but it's small & worth it, & cached after.
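For reference, an import map is a small inline manifest that lets the browser resolve bare module specifiers without a bundler; the CDN and paths below are illustrative:

```html
<script type="importmap">
{
  "imports": {
    "react": "https://esm.sh/react@17",
    "app/": "/static/js/app/"
  }
}
</script>
<script type="module">
  import React from "react";           // bare specifier, resolved via the map
  import { boot } from "app/boot.js";  // prefix mapping for local modules
  boot(React);
</script>
```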

Again we're damned though. Years too late to try & see how excellent it would be to have something like React cached & AOT compiled from a CDN. Because now privacy-concern freak-outs mean this huge advantage of only needing to pull & potentially compile a JS module once is gone: site-partitioning rules. We could have had better architecture, been using the language rather than absurd bundlers, and enjoyed high cache hit rates for common libraries. SPAs just didn't care, never tried at all; we all (almost all) did a horrible job & took way, way too long (over a decade) to make modules modular & usable. There was so much hope & promise & such absurd non-delivery, on the module front, on app architecture.

HTTP/3 and early hints still offer some promising hope for accelerating our transports, making "just modules" a fast possibility without careful hand optimization. We could still do more to optimize bundles: have automated tools that analyze up-front versus on-demand dependency bundles, and build HTTP bundles of these. But I hope eventually it's mostly not super necessary to build webpackage (nor, far worse, webpack) bundles.

SPAs still have great potential. More so, now that we finally have some support tech for modules forming.


I mean, of course the co-creator of Django would say this.

I wouldn't recommend the newer generation of developers build traditional web apps, let alone use jQuery.

I don't understand why people think we're still in the age of form submissions and blog posts - there has to be a good majority of us here that has worked on something complex that required SPAs, no?

Not only would it be detrimental to a young developer's career to suggest avoiding SPAs in regards to hiring, but only limiting that developer to create blog post styled content is severely restraining.

Let them develop their blogs in SPAs, at least when they are needed to go into something a bit more complex, they at least have the foundational knowledge required to move towards that.

What you're suggesting is to learn two things (one of which is inevitably being phased out), and spend the mental effort to discern when to use either one, when the more beneficial alternative is to learn SPAs and just go with it.

No 18-25 year old is trying to make a weblog where walls of text are the main content - Youtube shorts, instagram reels, tiktoks and all this bite-sized content has done a great job at destroying that level of attention span.

They're going to be building something else, something quick and visual, something pleasing to the eyes - and more often than not, it's going to require an SPA.


> I don't understand why people think we're still in the age of form submissions and blog posts

Because we are.

> There has to be a good majority of us here that has worked on something complex that required SPAs, no?

No. Because complex stuff doesn't require SPAs.


> There has to be a good majority of us here that has worked on something complex that required SPAs, no?

You can still create something complex without using any of the common SPA techniques; you can use things like Hotwire, Livewire, or htmx instead.


not practical for hiring.

and although I find it intriguing that hey.com is using Hotwire, it's still insignificant compared to some SPA frameworks' ecosystems.

Performance isn't the end-all-be-all; otherwise we should write another article saying "Python and PHP were a mistake for web servers".

There is some give to be had for the sake of practicality.


I'd say it is very practical for hiring, as you will need just 1 or 2 developers that know JavaScript and (Rails|Laravel|etc..) for every 4 or 5 you'd need otherwise (some that know JS, some that know backend, and some to coordinate/manage them).


I really view it as the opposite.

Prefer the writing of an SPA or serverless MPA.

If you consider that native desktop and mobile applications are siloed applications that coordinate with an API to achieve tasks - this is basically how SPAs or serverless MPAs work.

Part of the reason this is effective is because of the low cost nature of deploying applications like this.

For example, I can write a calorie counter that stores records in the client via IndexedDB.

Given all the work is processed on the client, using an HTTP server would be an unnecessary maintenance burden, as it would simply serve static files.

Rather than host the web application via a self managed http-server, I can just put my html files on S3 making hosting it free and unmanaged.
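As a sketch of that idea, the entire data layer can be a bit of client-side logic; the functions below are hypothetical, and persistence (via IndexedDB) would be the only browser-specific part:

```javascript
// Client-only data layer for a hypothetical calorie counter.
// These pure functions would be wired up to IndexedDB for persistence;
// no server is involved at any point.
function addEntry(log, { food, calories, date }) {
  return [...log, { food, calories, date }];
}

function totalForDay(log, date) {
  return log
    .filter((entry) => entry.date === date)
    .reduce((sum, entry) => sum + entry.calories, 0);
}

let log = [];
log = addEntry(log, { food: "oatmeal", calories: 300, date: "2022-03-02" });
log = addEntry(log, { food: "banana", calories: 100, date: "2022-03-02" });
// totalForDay(log, "2022-03-02") === 400
```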

Should I decide I need to add user accounts and cloud storage - well I can then create a backend that exposes API endpoints to facilitate the tasks.

Those endpoints are then compatible with native applications, should I decide to write native mobile and desktop variations of my web application.

Furthermore, with WebAssembly expanding to offer the ability to write web applications using languages like C++, Rust, C#, Golang, and the browser expanding access to OS subsystems like filesystem access - what we are seeing is that the browser is becoming a sandboxed UI toolkit, much like GTK or Qt (except without native styling).

If there was anything that would empower Linux Desktops to be compatible with productivity software - it's progressive web applications.

Consider that Photoshop and Office are accessible on Linux via web today.


That's a lot of interfaces to create and maintain. The principal value of backend/MPA frameworks like Rails & Django is that they give you nearly all these interfaces for free in a neat package, which ends up being "good enough" for many use cases below Google-scale.


A calorie counter using IndexedDB is a great example of something where an SPA is appropriate - like I said, "default to not writing an SPA, unless your project has specific, well understood requirements (e.g. you're building Figma) that make the SPA route a better fit".

I mainly work in the world of database-backed websites and applications, where going client-only without a backend isn't an option.


> Consider that Photoshop and Office are accessible on Linux via web today.

Never used Photoshop, but Office on the web still sucks compared even to native MS Office.


You're describing an actual client side (mobile or desktop) application made with web technology, not a web application. That's a fair use of SPA tech.

As soon as you need authentication, showing data across users, allowing visitors to see shared data, perform validation of inputs, send notifications when other user action happens, etc you're back in SPA hell.


Why do any of those attributes put you in SPA hell?

If you were to write an Android application that featured Authentication, would you also be in "app development hell"?

Languages, performance and UI decorations aside - it's basically the same thing, no?


>e.g. you're building Figma

i think what it comes down to is if you have to make a decision of "should i build an SPA?" the answer is no. the web is good at doing pages. if your app has pages, use the web's default page mechanism.

and i don't say this as a hater of single-page apps. i love webapps, and i think that building a webapp should be the default for most cases. there's a lot of apps that don't naturally break into a "page" metaphor, and all the technologies that are part of the single-page app concept are great for those cases.

figma is a perfect example, because there's no obvious division between what would be one page versus another. it's not a paginated website that has been built as an SPA, it's literally just one page that has a whole bunch of interactivity.


Don't don't do what I would do. The mistake is forgetting first principles. You don't do something good by focusing on what not to do (like don't be evil). Focus on what to do: KISS, YAGNI, etc.. even DRY is a far lower priority. Software, especially frontend and web are rampant with problems from operating as an echo chamber. Just consider what Ryan did with Deno and had to say criticizing his first project. Yet folks are still wildly supportive of the older technical decisions and go to great lengths to preserve those same mistakes.


Idk, I don't think the problem is the SPA itself, it's bad design patterns that make it terrible, as you say.

I think really clean, performant SPAs can definitely be written and I think the overall experience of using a SPA can be much better than a multi-page site if the task at hand requires it.

There should be two parts of the web now really: * Traditional multi-page websites * SPAs that could have been a native app on the device, but are much more accessible in web form and without requiring an install


The alternative to SPAs is not 90s pages reloads. Nowadays you have livewire, hotwire, unpoly, htmx and several other modern solutions.


Speaking for myself, I find it much quicker and easier to build an SPA than a server rendered app. You seem to take the stance that server rendered is the default, normal way to architect and SPA requires justification for its aberrant departure from the norm.

SPA have lots of advantages: fewer languages to learn, easier to deploy, etc.


> SPA have lots of advantages: fewer languages to learn, easier to deploy, etc.

The two examples you give are only true if you don't have a backend at all. As soon as you have a backend, you're back to having to pick a backend language and deploy a backend server.

If your app doesn't need a backend, then I'd agree that an SPA is the way to go.


I think the person you're replying to is implying using JS/Node for both front- and back-end.


Example wealthfront.

All pages are SPAs. I mean, page A is an SPA, page B is another SPA, and page C is another SPA.


I suggest we go back to iframes


Totally agree.


That just sounds like poor-performing developers to me. You don't need snazzy animations between states. Just not reloading the full page is a benefit.


We have a PHP app that doesn't reload the full page. We use jQuery's .load(), which has been around since 1.0:

https://api.jquery.com/load/
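For anyone who hasn't used it, the whole pattern is roughly one line; the selector and URL here are invented:

```html
<script>
  // Fetch a server-rendered HTML fragment and swap it into the panel,
  // with no full page reload. The selector and URL are hypothetical.
  $("#orders-panel").load("/orders/table?page=2");
</script>
```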


Okay - build a complex site with jQuery and see how it goes...


That's what I'm saying: I maintain a complex web app that is built with PHP-templated HTML with jQuery sprinkled in as needed to make things more interactive. It's not perfect, but it's a far cry from the nightmare that people always seem to imagine when they think of "the jQuery days".

Would I use jQuery if I redid it today? No. But this app does prove to me that progressively-enhanced HTML is a valid path to an app today, it doesn't need to be an SPA.


Then you can use Livewire, Unpoly, or HTMX. MVC doesn't mean 90s style reloads anymore.


I think one of the unsolved problems of client-side interactivity on websites is how difficult it is to add just a little bit of extra client-side functionality to a traditional server-rendered website. For example, recently I had to deal with photo uploads on a Rails app, which works fine out of the box at first, until you want to show progress bars and upload previews etc. Then you add a couple of client-side Stimulus controllers, maybe a Turbo frame here and there, and it works - but then you get to browser navigation and you're screwed, because none of this works if somebody submits the form, then navigates back. Now you have to implement lifecycle handling to account for navigation, and once you're done, you've basically implemented an SPA, except it's broken into a mix of tightly-coupled JavaScript, Ruby and ERB templates.


The problem is the mixing and matching of state management. In a traditional web app all of the state is in the back end, in an SPA all of the state is in the front end. When we share state between the two it's often messy.

To me the answer feels like it should be "traditional web app for most things, components-as-first-intended for some things". The simplest React example is just one component that abstracts presentation and logic. The only state is its own. It does not handle an entire web app as a SPA, no Redux, prop drilling, it's just the idea of a reusable component as an HTML tag. Same with VueJS and all. If we restrict ourselves to that we're in the good path.

If there was already an HTML tag for your own specific problem, wouldn't you just use it in your traditional server-rendered app? A `<photo-upload url="/photos">` tag that does exactly what you want. Or a `<wizard pages=5 logic="com.domain">` tag. We should create just those components, either in React/VueJS/Whatever or in vanilla JS Web Components, and live with the rest as we used to.
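A rough sketch of that `<photo-upload>` idea as a vanilla Web Component; the tag, attribute, and endpoint are the hypothetical ones from above:

```html
<photo-upload url="/photos"></photo-upload>

<script>
  // Hypothetical sketch: the component owns its own upload behaviour,
  // while the surrounding page stays a plain server-rendered document.
  class PhotoUpload extends HTMLElement {
    connectedCallback() {
      this.innerHTML = '<input type="file"><progress value="0" max="100"></progress>';
      this.querySelector("input").addEventListener("change", (event) => {
        const body = new FormData();
        body.append("photo", event.target.files[0]);
        fetch(this.getAttribute("url"), { method: "POST", body });
      });
    }
  }
  customElements.define("photo-upload", PhotoUpload);
</script>
```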

We're basically saying that some parts of our apps are too difficult to mix and match state management, and we should offload all of the state to one of the two sides. In some rare apps indeed all of the state should be on the front end, but the use cases for that are just not as big as they're made out to be.


Exactly, the best approach seems to be where the app is composed of traditional pages with server navigation between them, but each page is implemented as an SPA.

This approach eliminates the need for a client-side router, keeps any centralized page state small, and improves the SEO and bookmarkability of the app.

I have implemented this architecture in several projects, and it’s effective.


I disagree with that. There's no real reason for each page to be a separate SPA, most likely only a small part of that page will be interactive, and most of it will be cacheable. Extracting only the interactive parts of a page into components is what I'm talking about. In some cases that'll be all of the page, but there are very few apps that meet that criteria.


Of course if only a small part of your app is interactive, then SPAs don’t come into the picture in the first place.

My point is that for very interactive web apps, this is a significantly better architecture than a huge monolithic SPA.


Not only that but bundle splitting allows for extra areas of the application to be loaded at runtime on demand.

All of that said, I'm not opposed to approaches like Next/Nuxt/Aleph.


I mean in this model, if each page is an SPA, then the line between an SPA and an interactive page is very blurry.


I've had this thought experiment before, but where I get stuck is - what happens when you want to share state between the pages? URL params, storing to indexedDB, sticking them in global variables - all viable solutions but all with their own issues.

I'd be very interested to hear how you solve this, or if it's less of an issue than people might think.
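One lightweight answer, for small amounts of state, is to serialize the shared slice into the URL so the next page's SPA can rehydrate it on load; larger or more sensitive state would go to sessionStorage or IndexedDB instead. A sketch (the state shape here is invented):

```javascript
// Serialize a small piece of shared state into URL query params so the
// next page's SPA can pick it up on load. The state shape is hypothetical.
function stateToParams(state) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(state)) {
    params.set(key, JSON.stringify(value));
  }
  return params.toString();
}

function paramsToState(query) {
  const state = {};
  for (const [key, value] of new URLSearchParams(query)) {
    state[key] = JSON.parse(value);
  }
  return state;
}

// Page A links to:   "/reports?" + stateToParams({ range: [1, 7], tab: "sales" })
// Page B rehydrates: paramsToState(location.search.slice(1))
```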


>app is composed of traditional pages with server navigation between them, but each page is implemented as an SPA.

I agree with this because you affirm my bias.

Also because I've also had success implementing this structure. This makes sure that state of each page doesn't leak everywhere, which simplifies the state management.

It also loads faster than a big SPA because you don't have to worry about bundle splitting!


Isn’t this basically what NextJS and NuxtJS provide, if you only leverage their static rendering?


The correct answer for this stuff, and it absolutely kills me saying this, is probably something like asp.net web forms.


Yeah, as much pain as it has caused me in the past, I think the crucial realization Microsoft made with WebForms and maybe Blazor is that the stateless model of the web is fairly crap if you are doing anything beyond displaying static text.

For its time it was some cool tech.


Asp.net web forms absolutely sucked. They fundamentally misunderstood HTML and how the internet even worked.

Modern Razor's pretty good (not Razor Pages, they also suck) which is probably what you mean.

But it's not that much different from Rails, Laravel, Django, etc.


Yes I mean the original version of it.

I'd love to watch people attempt to implement some of the stuff I pulled off with that back in the day now. Occasionally, very complex data driven page flows are required and you could nail that entire problem domain trivially with them.

There is a very large and well known company with a front-facing product used by millions of people which took 5 years pissing around with three different front end technologies to reimplement what we did in 3 months with that and it was slower and heavier and basically reimplemented web forms in python in the end.

It broke the www intentionally because it sucked. And it still does.


Speaking as someone who is still maintaining a web forms app, that sounds like a people problem, not a tech problem. You can implement literally any workflow if you understand continuations, and it will be simpler and more lightweight than web forms.

If you don't understand continuations, then you'll struggle and invent all sorts of poor state management patterns. Web forms simplifies some of this because the continuation is captured in viewstate, but it captures too much state (there are other warts too of course).


What’s wrong with razor pages? They’re fantastically simple and productive.

And WebForms wasn't a misunderstanding. It was a deliberate and brilliant design that brought WinForm developers and their experience to the web and allowed them to build complex web apps two decades ago that still work to this day.


Yeah, and as soon as they tried doing that, they found out it was all superficial, and the fact that there were HTTP calls in between meant everything started falling over and they saw weird behaviour. And instead of learning how to work with state management in general, they learnt how to work with WebForms' state management instead.

I'm working on one of those webforms projects right now, a legacy project that they need some fairly trivial tweaks to and it's an absolute nightmare of bad code. And I've done this before, it's become an accidental skill that I can still fix these awful messes.

One of the (many) reasons why webforms and the new Razor Pages are bad is because the code gets split up according to the UI instead of by function. So it gets scattered all over the place and is incredibly hard to do maintenance work on it.

The page-centric code layout that Webforms/Razor Pages/all the old PHP/Perl/etc. encourage is also extremely conducive in encouraging copy-and-pasta code for programmers trying to get stuff done asap.

So not only is it a nightmare to pick apart the code, you can often find 2 or 3 copies of the same code, which you only discover when one page is working as expected but the same functionality used somewhere else isn't, because someone just copied the page/control instead of actually splitting the page up into controls and re-using functionality.

I've seen this happen over and over and over. A single developer can avoid these pitfalls, a team cannot.


It's contextual. For me, WebForms fell down because it let the average web developer impose too many "costs" on internet-facing projects. A WinForms developer is a very specific kind of developer with a high focus on development of internal, or line-of-business (LOB) apps. WebForms also excelled at this, but brought more reach as people moved away from a preference for desktop apps.

The height of WebForms coincided with an embrace of web standards and accessibility, which flows into the Web 2.0 era. You had to jump through a lot of hoops with WebForms to get it to behave in a web-friendly way. The underlying .NET framework and base of ASP.NET (HttpHandler and HttpModule) was outstanding though.

(I still build/maintain WinForms, ASP.NET, and WPF apps.)


> people moved away from a preference for desktop apps.

Who did; which "people"?

Corporations and "Web developers" maybe; users never asked for it AFAIK.


WebForms was a complete disaster, made solely to allow Microsoft salesmen to quickly slap together a demo.

The problems started when you wanted to put together a REAL application. Because the entire paradigm of a web application programmed like a desktop application was flawed it resulted into an endless stream of headaches and workarounds, with bloated, complex and slow applications as a result.

In that sense ASP.NET MVC was a breath of fresh air.


I just remember massive state being injected into forms, and it wasn't pretty, was massive at times and just unpleasant regarding WebForms. It was fine for internal apps with a network connection, the experience was horrible if you were a dialup user. Especially in components that a change triggered effectively a full server round trip to update the whole page.

It got better by ASP.Net 3, but MVC/Razor was much, much better imo.


By Modern Razor you mean Blazor components?


Razor Pages.


Please, go learn about Remix (https://remix.run), it is the actual correct answer for this stuff.


I don't understand why they chose an example -- a simple user dashboard -- that could be easily implemented as a traditional MPA. It seems like Remix is mainly for the "I learned to program with SPAs" audience.


Remix's _messaging_ may be aimed at the "SPA generation", but its capabilities are light years beyond a Rails w/ Turbolinks or whatever your concept of a traditional MPA might use. I'd encourage you to look a little closer.

My kneejerk reaction was "clearly you don't get it" -- but your critique and (mis)perception speak to their marketing / messaging, which, well, yeah, this is the world we live in. SPA has become the default, and Remix is pushing back on that, hard, and I'm stoked.

I've been doing webdev for a living since 1998, and Remix (like https://every-layout.dev 's "axiomatic css" - but I digress) is doing something profoundly powerful by leveraging the amazing power of the web platform on its terms, using the native APIs, and doing much, much more by being simpler and doing less. It's so refreshing.


Why does it kill you to say that? It’s the better and more productive option.


Because I abandoned it thinking it was terrible. Then I spent a decade finding out how bad everything else was.


JS frameworks with a small footprint are suitable for this.


Do you lose performance because each page has to load the SPA?


Not if the SPA subsequently takes over, (internally) routing links it recognises. (Nuxt does this)


Not much, if you use dynamic loading for heavy functionality that may not be used.


Yes this is exactly right. What I’ve done in the past is use Django to return HTML with JSX mixed in, and have a super lightweight SPA frontend that just hydrates the React components on each load. You can also use form state to communicate back and forth with the server, where sending a response doesn’t refresh the entire page, just a React render diff. With this you get the best of both worlds, where your backend can do the heavy lifting - everything it needs to decide on the view is all in one place - and your frontend just comprises really, really generic JS components.

I have a library I’ve been playing around with for 2 years now, I should package it up


> I have a library I’ve been playing around with for 2 years now, I should package it up

Please do.


Agree, we just add React components here and there to server-side rendered HTML and it works great for us. The issue is most companies want separate teams for front-end and back-end and each team wants clear separation of boundaries and responsibilities between them.


The old mythical man month strikes back. Best devops is no ops. Best team is no team.

Of course eventually you need more people, and I don't love anyone touching my code, but I'd rather have someone incompetent do the entire stack that at least I can review than spend weeks upon weeks communicating basic assumptions.


Adding components here and there sounds like a simple app. Nothing wrong with that, of course.

Different teams exist for a reason. I've deeply regretted working in "we have real Devs that can do it all" environments.

It's not so much about boundaries as it is about competence. Or the lack of it.


this seems to be approximately what https://htmx.org/ is trying to accomplish.


HTMX goes about it in the wrong way, in my opinion. HTML code should not carry state and logic. The moment you need something a bit more complicated than the common examples you’re in HTML-attribute soup trying to use a real programming language.

It’s better to just write that as a component in React or Svelte or whatever. It’s more testable, easier to understand, can carry state and logic just fine. It does mean you have to communicate in JSON for a small part of your app, but that is reserved to a few well known endpoints and components like a complex form.

The penalty is loading React or some other lib just for that, but if used in this way the extra dependency isn’t really a big deal as it’s not your whole app. Just use React as a library for components the browser doesn’t already provide.

And by components here I should really clarify as “affordances”. A “primary” button isn’t an affordance, the browser already gives you a button element and CSS classes (or HTML attribute) for that. It doesn’t give you a “multi page wizard with immediate validation” affordance, however, so that’s a good candidate for a component.


the original network model of the web was REST[1]

a core aspect of REST was HATEOAS, which stands for Hypermedia As The Engine of Application State[2]

htmx goes about it in the same way as the original web

perhaps that is wrong, but the web was pretty successful overall

[1] - https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arc... [2] - https://htmx.org/essays/hateoas/


> perhaps that is wrong, but the web was pretty successful overall

Since HATEOAS is rarely implemented fully in practice, it would be simplistic to point to it as the reason the web is successful. The web allows an incredible variety of different architectures - some more closely aligned to HATEOAS and some less so. Perhaps this is the reason for the success of the web?


HATEOAS was and is implemented widely and effortlessly in hypermedia, since Fielding was describing the existing web architecture. It has failed fairly dramatically in JSON APIs, because JSON is not a native hypertext. Early API designers tried to shoehorn HATEOAS concepts into their APIs, which was somewhat plausible when APIs were XML, which, when squinted at, looked sort of like a hypermedia. Once we kicked over to JSON (and back to RPC as a network architecture) it became silly.


I’m not arguing against that, just that huge chunks of the web break the HATEOAS abstraction, as it were, and yet, as you pointed out, “the web is pretty successful overall”. It’s not super clear to me that this is because of hypermedia or not in the sense that correlation is not causality.


The Web went all JavaScript instead of HTML sometime between 2000(2005?) and 2010(2015?). But by 2000 it was already wildly successful, so apparently it must have been HTML -- "hypermedia" -- that was the key to success, not JavaScript / JSON.


HATEOAS is great, but that's not my issue with HTMX. It's coding interactivity via HTML attributes that I have an issue with. It just never works. It overcomplicates things and there's an actual programming language right there in JavaScript to solve this.

When it goes past the simplest examples it gets ugly real quick. Here's an example of client-side validation via HTMX: https://htmx.org/docs/#validation. That can't be the future.


yes, if you want to do things client side, hypermedia isn't going to be a great solution for you

htmx integrates with the existing HTML 5 validation API because, well, it's there, and that's what normal HTML does. I don't care much for the API, but that's the standard so we follow it. You can, of course, do client side validation however you'd like and integrate it with htmx using events, since events are the proper glue to tie things together in the DOM. I built a scripting language to help with this, and other stuff as well, called hyperscript: https://hyperscript.org

the "htmx" way for validation is the HTML/web 1.0/HATEOAS way: submit the form to the client side, validate and re-render. With htmx that can be inline, rather than a big clunky refresh-the-page action, which is why I say htmx extends and completes HTML as a hypermedia.

So long as you aren't willing to think about things in hypermedia terms, and htmx as an extension/completion of HTML as a hypermedia, you are going to miss the point. That's not to say that the hypermedia approach is always right, it isn't, but with htmx a lot more of the web application problem space is addressable with that approach.

And, as an aside, HATEOAS isn't great, outside the context of a hypermedia.

https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...

It is, rather, a fairly pointless category error, driven mainly by cargo-cult mentality from the early JSON API era.


>The problem is the mixing and matching of state management

Routing (history management) is also a big problem (I'm assuming you weren't including that in your definition of "state" here).

Some would say, "but React, etc. have solved the routing issues", and that's perhaps the case. But what's difficult is wiring these routers up outside of a full React app. That is, if you truly want routing "solved", you generally have to go all React (or whatever) or just let the browser handle it in the traditional request/response sense. Sprinkling in just a bit of dynamic interaction wherein you want the history managed is purgatory.

And, on mobile, things get even more interesting. Consider the simple case of popping a modal (especially a slide-out). Many users will hit "back" on a mobile device, which they would reasonably expect to simply close the modal. But, if your app/page doesn't intercede to manage the history, the previous page is loaded instead.
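The modal case above can be handled with a same-URL history entry. A minimal sketch (names are illustrative; the history and UI hooks are injected so the logic runs outside a browser):

```javascript
// Illustrative sketch: make the mobile back button close a modal instead of
// leaving the page. History and UI hooks are injected, not real globals.
function createModalHistory(historyApi, setModalVisible) {
  function open() {
    // Push an entry with an unchanged URL: pressing back now pops this
    // entry (closing the modal) instead of loading the previous page.
    historyApi.pushState({ modal: true }, "");
    setModalVisible(true);
  }
  function onPopState(event) {
    // Fired when the user navigates back past the modal entry.
    if (!event.state || !event.state.modal) setModalVisible(false);
  }
  return { open, onPopState };
}
// In a real page you would wire it up roughly like:
//   const modal = createModalHistory(window.history, visible => { /* ... */ });
//   window.addEventListener("popstate", modal.onPopState);
```
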


Routing (along with hash path and query string) IMO is definitely a part of state. Different routes lead to different content, thus different state.

The problem is which source of truth we want to use. I find that I'll need to refer to routes as the source of truth to process information, meaning "almost" every React action becomes a route-manipulation action and its state is just derived from the route, which is very similar to what MPAs already do.


Yeah, I think history is definitely a form of state, but didn't seem like the state the parent to my comment had in mind.

And, I do think they're distinct problems (but related) when you're just adding a bit of interactivity to a page or two, as this sub-thread is discussing.

In a full SPA, it's largely coupled and more in line with what you say: let the route drive the action. But, sometimes you have an MPA with a dynamic stateful feature embedded at a particular URL, but which may or may not need a sub-route of its own. In those cases, weird stuff can happen if the history is not managed correctly.


> The problem is the mixing and matching of state management.

This. But IMHO the right solution is to make the back-end stateless and manage all client state on the client. Each request authenticates itself, and (if you're RESTful about it) the back-end is simply a database connector/augmenter.

In this paradigm, the SPA is basically a desktop app that retrieves data from a server, built to run within a framework, which happens to be a web browser.

In case you're jumping to conclusions, know that I'm a late-comer to the SPA party, having resisted from its inception until about a year ago, for all the obvious reasons, including those bemoaned by the OP.

Why did I relent? SPA frameworks like React now handle pretty much all the heavy lifting for you. OP, you should check out React Router, which can render this post's examples irrelevant. I'm surprised that in 2022, someone writing to the web UI layer would bother to create code to manage the address bar when there are a thousand ways to not have to.
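The "each request authenticates itself" idea can be sketched as a tiny client wrapper. This is illustrative only (the token source, base URL, and fetch are all injected assumptions, not any particular framework's API):

```javascript
// Illustrative sketch of stateless requests: no server session, every call
// carries its own credentials, and all client state lives on the client.
function makeApiClient(baseUrl, getToken, fetchFn) {
  return function request(path, options = {}) {
    return fetchFn(baseUrl + path, {
      ...options,
      headers: {
        ...(options.headers || {}),
        // The backend validates this on every request; it keeps no session.
        Authorization: `Bearer ${getToken()}`,
      },
    });
  };
}
```
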


A lot of the arguments against SPAs in TFA center on broken U/X. This is only incidentally related to SPA architecture, and has much more to do with the current SPA ecosystem. IMO it’s more just a lack of standardization and the broken U/X is just an emergent property of the vacuum. MPAs are great because the technology is old and navigation between pages has first class support in all browsers, not because of some intrinsic advantage of the way application state is managed.


How well does React Router solve these problems if you're not using React fully?


Any non-trivial app will require some frontend state eventually.


Exactly. I've been in the position various times where I'm embedding a JS app on a page to help the user do something highly interactive, usually creating/editing content. And then as it's expanding to integrate with other things on the site, I start to wish more of the site was in the JS-app side of things.

Sometimes I realize a SPA would have simply served the user better, and it doesn't necessarily take much to be in that predicament.

Too many people hate on SPAs by, presumably, just imagining static read-only content like blogs and news. Though I'm also a bit tired of "SPAs suck amirite, HN?"

Put the user first, consider the trade-offs that work towards that goal, and see what shakes out.


The other thing people who hate SPAs need to think about is that rendering blogs/news/static sites isn't a problem that developers really work on anymore. Businesses will just pick from the huge pile of existing CMSs or hosted solutions because it's easier and cheaper. So if you're wondering why all these developers are building SPAs, it's because we're building custom applications, not blogs.


I think the fundamental issue with SPAs is that they're built on multiple levels of technology that fundamentally weren't designed to support being a single page application.

The browser <-> multiple pages paradigm is pretty much how the web evolved, so SPAs just end up being one giant hack to get everything working.

UWP/WPF/any other desktop app framework demonstrates how easy developing a 'single page application' can be without all the cruft you have to add to make a web SPA work, because the web SPA is actually a sort-of-massive workaround.


Well, the browser has certainly evolved past the point of SPAs being nothing but a hack. The browser has evolved into a heavily generalized application environment, as much as we may want to bemoan that. A good web client can surely demonstrate this.

It's certainly true that you don't get to lean on built-in features like history support, but that's why you can now drive history with Javascript. And all sorts of other things. And if you're smart, you're using a solution that handles these things for you.

Rich client development is always hard, on any platform, and you always make concessions for the platform you're on. I certainly have to when I'm building iOS apps. But I see no reason for this to dissuade you if you can push a better UX to the user.

As tried-and-true as server-rendered architecture might be, there are all sorts of things it will never be able to do no matter how much of a hack you think web client development might be. Software is a hack. And at the end of the day, your users may remain unconvinced when you preach about what the browser was "meant to do" when they begin coveting a richer experience.

That's something that's often left by the wayside when we discuss these things. We talk about technical superiority and become prescriptive about what we think technology is for. While fun thought exercises as craftspeople, we too seldom talk about delivering value. If leaning into the historical nature of the browser supporting HTML pages over the wire helps you build a better experience, that's great. But that's not the only option you have today.


> Rich client development is always hard—on any platform—, and you always make concessions for the platform you're on. I certainly have to when I'm building iOS apps. But I see no reason for this to dissuade you if you can push a better UX to the user.

I think the difference is how much you as a developer have to "fight" the platform. Having to implement history management yourself very much qualifies as "fighting" to some extent in my eyes… continuing with native platforms as an example, that sort of thing just isn't necessary in most cases – like with iOS, 99% of your "history" management comes for free with some combination of UINavigationController and view controller presentation, assuming the use of UIKit.


As others have said, history management in a SPA tends to be done for you. But if you were to do it yourself, you would centralize your href-click handling in a single spot where you go history.pushState(path) and never bother with it again, it simply hooks into all <a>.

While bad SPAs do this badly just like bad iOS clients do things badly (like incessant spinners, zero caching, and unselectable text), it's a small concession to make in the scheme of client-side development.

On the other hand, have you ever wrestled with CoreData on iOS? It's like using the worst ORM with the worst Active Record abstraction, easy to get wrong, yet that's the tool you're given. And you're choosing between that built-in solution or going off the rails with another solution with its own trade-offs, and both paths feel like you're wrestling with the platform.

It just comes with the space of client-development, you just tend to get used to wrestling with the platform you have the most experience with, and it's easy to forget that when you judge the concessions that must be made on other platforms.
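The centralized pushState handling described above might be sketched like this (illustrative names; browser objects are injected so the routing logic is plain, testable JS):

```javascript
// Illustrative sketch of centralized SPA link handling: one place calls
// history.pushState and re-renders, instead of per-link hacks.
function createRouter(historyApi, render) {
  return {
    // Call this from a single delegated click handler on <a> elements.
    navigate(path) {
      historyApi.pushState({}, "", path); // update the address bar, no reload
      render(path);                       // swap in the new view
    },
    // Wire to window's "popstate" so back/forward re-render instead of reload.
    onPopState(path) {
      render(path);
    },
  };
}
```
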


History management is handled by the browser in SPAs / JavaScript land too. Apologies if I misread the insinuation here.


It comes for free in mature SPA routing frameworks as well. Who is implementing history management themselves?


At least since HTML5 and CSS3 and ES6 (so many years already) these technologies are made for SPAs. There aren't many better UI frameworks, and WPF isn't that by a long shot.


I wouldn't really say that - CSS3 was made in 1999 so was hardly made for SPAs.

Additionally HTML5 wasn't really 'made for' SPAs, it added features to HTML which could help support SPAs but its main design decision was to be 100% backwards compatible with the older HTML spec. HTML5 is made for SPAs the same way that a stretched limo is made for commuting.


SPAs are old though. Their popularity is modern, but I was making SPAs in 2005 - and they most definitely were a concern of HTML5 and CSS3; they were a concern even before, during the DHTML days. You're right it wasn't the primary concern - that's backwards compatibility, after the XHTML blunder - but that doesn't mean HTML5/CSS3 wasn't made with SPAs in mind.


ok don't look up how long grid layout has existed in WPF


Who cares? (I know btw I used to be a C# engineer in my previous life - I actually first used WPF when it was still beta)

Here and now, WPF is much worse. Even Microsoft itself now invests in React Native.


> Even Microsoft itself now invests in React Native.

Microsoft invests in React Native as they want to bring applications built in React Native for iOS and Android to Windows.

The fact they are investing in React Native doesn't necessarily mean much more than the fact that they want developers to compile their React Native apps to Windows as well as the mobile platforms.


WPF hasn't had any serious investments for, what, 10 years, no?

But OP's point still stands - it is much less hacky than anything in the web ecosystem even so.


I mean, grid layout has existed in Tk (the one with the Motif look) since the earliest version I can quickly find documentation for, 8.2 from August 1999; Wikipedia tells me that 'table', apparently an early version of 'grid', was introduced in July 1993, that is to say around the time Windows NT (and with it Win32) was officially released. (And of course plain Win32 never got it, seemingly because Microsoft decided they weren’t in the business of selling toolkits despite USER being one.)

The hard idea from the programmer’s point of view is automatic layout, not grids; my impression was that Tk originated it (it certainly brought it to light), but now that I’m looking I’m not sure it wasn’t already in Motif to some extent, so the roots of the idea might go down into the primordial soup of Project Athena and similar places.


In my opinion, UWP/WPF/any other desktop app framework makes it easy because it just doesn't implement the things a user is used to in a browser (bookmarkability, back button, ...).

If you ignore those things in your SPA, much of the "cruft" is negligible.


This is pretty in line with grandparent’s comment in that the browser was simply not made for this - aka SPAs are sort of a hack on the traditional page-based approach.


Many desktop apps have back buttons, and of course mobile apps always have them. They're also more reliable; there is no desktop/mobile equivalent of the "warning: need to re-POST this form" problem.

As for bookmarks, that's semi-right, but the web platform conflates the back button with bookmark-ability in ways that are quite confusing. If you want the back button to get the user out of your settings screen, then you must push a new URL onto the history stack using JavaScript, but then if the user bookmarks your site they'll bookmark the settings screen specifically even though that's very unlikely to be what they wanted. Non-web apps that have enough content to need something like bookmarks often implement a similar mechanism that's more predictable, albeit less general.

The OPs article is really a common lament amongst web developers - they're (ab)using a tool that was never meant for what they're using it for. It's a real and very serious problem in our industry.

The whole SPA vs multi-page dichotomy really emerges because the web doesn't have sufficiently rich, modular or low level APIs. If you look at non-web frameworks like iOS or Java then you get a huge number of APIs (usually classes) tackling the low levels of a problem, then more classes building on the lower levels to give you higher level solutions. Those classes can usually be customized in many different ways, so the amount of control a developer has is enormous. If you start with a high level helper class and find you can't customize it to meet your needs, you can reimplement it using the lower level building blocks.

The web isn't like this. Web APIs are an absolute pig's ear of very high level function calls and classes designed in more or less random ways to quickly hack together a solution to whatever need a browser maker had at the time. HTML5 supports peer-to-peer video streaming and augmented reality but not a virtualized table view. In many cases it's obvious that little or no thought went into the APIs, e.g. XMLHttpRequest has nothing to do with XML, you can open a URL for a custom URL handler but not detect if that URL will work ahead of time, nor even if opening that URL worked once you try it, and so on. Instead of starting with low level APIs and composing them into high level APIs, browsers start with high level APIs and then introduce entirely orthogonal low level APIs decades later, or never.

These sorts of problems just don't occur when writing desktop apps with professionally designed frameworks - even in the rare cases that you hit some limit of the framework, you can (these days) just patch it directly and ship a fixed version with your app. If you hit a problem in the browser you're just SOL and have to hack around it.

Our industry needs to reinvigorate desktop apps. Attempting to convert browsers into Qt/Java/.NET competitors has:

a. Made HTML5 unimplementable, even by Microsoft. It's not so much a spec anymore as a guide to whatever Chrome happens to do on the day you read it (hopefully).

b. Created tons of security holes.

c. Yielded an atrocious and steadily decaying developer experience.


You could just use `history.back()` instead of pushing a new url... beyond this, you may need to listen to history changes so that your UI responds... most SPA frameworks have a router that supports this without much issue.


Setting limits for future features to keep the code base clean and manageable is something developers could be more vocal about.

Acquiescing to every demand product designers and management throw into the mix is what turns beautiful, easily-maintained codebases into nightmares.

What people are talking about here is writing web apps as a series of small, tightly-coupled SPAs that manage state within very specific parameters. And that's a great way to build software. Until someone comes in and asks you to draw in the state from one section into two other sections. You have the choice to a) say no and explain why, b) create a clever "glue" that will be hard to maintain and difficult to explain, or c) take the time to refactor the code into something more general and complex to allow for the feature on a more abstract level.

Guess which one almost always gets chosen.

The upshot is this: The solution to front-end complexity may not be a technical one, it may not require a new framework or library. It may be a shift in what we expect out of the web. We could always temper our expectations in order to keep our code clean.


The end goal of writing code isn't always a "clean and manageable" codebase.

Likewise the solution to SPAs isn't "say no to features".


I'm aware, but the discussion about the failures of SPAs always centers around their failures in maintainable code. So if one's issues with SPAs are codebases becoming unmanageable, the solution won't be technical, it's going to be managerial.

A start would be devs communicating to product & design what the technical limits are to the current approach so that it informs their decisions.


> Put the user first, consider the trade-offs that work towards that goal, and see what shakes out.

In my experience, every single team I've been part of that was building an SPA was because they put the developer experience and desires first, even if in the mid/long term the dev experience ends up being worse as the project grows.


> Exactly. I've been in the position various times

This is why React is such a popular JavaScript library.


I feel like most people that hate SPAs never have to deal with this type of thing, or even only have to work on the backend. They just don't get it. Of course, if you're using a SPA for a static website, you're also doing it wrong but that doesn't mean SPA itself is a bad thing.


I'm living proof that it is possible to simultaneously hate SPAs and understand why they are popular. I've built and worked on a number of them, I don't have a better alternative to recommend, and yet I still hate them because I think they are a clunky solution to the problem. We need something better, but I'm not smart enough to come up with what that should be.


I don't think I'm any smarter when it comes to this particular issue, but my thought is that the answer probably involves browsers providing more primitives suited to web applications, so it's easier to build a web app without some huge gangly framework, and frameworks have more to build on top of with less manual footwork introducing room for error.


"They just don't get it"

Believe me, I get it.

I dealt with building these kinds of features for a full decade before SPAs became fashionable.

Now I'm stuck here watching in frustration as people go all-in on SPAs because they didn't know how to solve these problems without them.


I've also been building these things for a number of decades now. That said, I thoroughly enjoy using the various, modern APIs and standards that have come together to enable single page apps. I'm all for first principles, but which stack are you talking about? The "backend" of the web has been about abstraction for a long time now. Building "these kind of features" has become easier and cheaper on any number of fronts.


I agree. These people hating on SPAs are mostly not frontend developers. They're backend devs that think returning an HTML page is all you need to do. Modern websites are very complex.


Or users who hate watching a whole application load to display some text.


If the only purpose of your site is to load some text or an image, go use a framework with SSG like NextJS. Then you get fast websites without having to pretend that writing stateful updates (e.g. jQuery) to the DOM is fun.


This is what jQuery solved. It made it ridiculously easy (compared to not using jQuery at that time) to add a little sprinkle of progressively enhanced JavaScript to a page. It still works today (and you don't even need jQuery now, as browser standards have mostly caught up), but everyone seems to have forgotten how to do it!
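As an illustration of that "little sprinkle" with today's standard APIs: here's a hypothetical enhancement function that submits a form over fetch and swaps in the server's HTML fragment (the `form` shape and selectors are illustrative assumptions, kept injectable so the snippet is testable outside a browser; without JS the form still does a normal POST):

```javascript
// Progressive-enhancement sketch: hijack one form's submit and replace a
// target region with the server's response, instead of a full page load.
// `form` only needs the few members used here (an illustrative shape);
// in a real page you'd pass a DOM form and window.fetch.
function enhanceForm(form, fetchFn) {
  form.addEventListener("submit", async (event) => {
    event.preventDefault(); // skip the full-page navigation
    const response = await fetchFn(form.action, {
      method: "POST",
      body: form.serialize(), // e.g. new FormData(form) in a real browser
    });
    form.showResult(await response.text()); // e.g. target.innerHTML = ...
  });
}
```
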


teach me :)


This is my biggest gripe with the direction of Rails.

I completely understand the historic reasons for wanting to keep JS to a minimum and stay within ERB/Ruby, but the JS component-driven UI pattern feels like the correct front end architecture for most commercial projects these days, especially now with better JS tools like Svelte, which feels like rails for the front end to me.

I started playing around with InertiaJS recently and I absolutely fell in love with it. I was surprised to see how niche it still is because it's pretty much what I've dreamed what writing JS with Rails could be for the last 5 years.


Progressive enhancement was largely solved before SPAs became a thing. It's quite easy to enhance the client side incrementally, but SPAs started out as useful for full-on web apps and now everybody uses them by default for some reason.


I agree. I've been thinking about this lately, and have implemented something I think is interesting in Haskell.

https://github.com/seanhess/juniper

It's an implementation of Elm (imagine React if you're a JS dev), but all logic is executed on the server. State is passed back to the server whenever you choose to listen to an event. The view is then re-rendered and virtual-dom diffed on the client. Non-interactive pages are just views. If you want them to be interactive, you add a Message and an update function.

I used it on a client project and it was pretty delightful.

It probably isn't documented well enough yet to make total sense, but I think it's a step in the right direction.


> Then you add a couple of client-side Stimulus controllers, maybe a Turbo frame here and there and it works fine, but then you get to browser navigation and you're screwed, because none of this works out of the box if somebody were to submit the form, then navigate back. Now you have to implement lifecycle handling to account for navigation and once you're done, you've basically implemented a SPA, except it's broken into a mix of tightly-coupled Javascript, Ruby and ERB templates.

Where in this process did you need to start adding navigation via JS? Did that functionality really require JS, or was it implemented that way just because other parts are JS, and the trend was continued? Could you have used Stimulus controllers to manage functionality on the page itself, but when it came time to navigate to a new page, just done a basic browser redirect?

Saying that SPAs are a mistake doesn't mean that a strong dependence on JS or frameworks is a mistake. It just means that trying to make an entire site fit into a single page load introduces more costs than savings. You can still have pages controlled via React or Vue or Stimulus, while forgoing the router functionality.

In my own apps, I use Vue and VueX to control page behaviors, but each URL is its own Rails page. For example, I have a search page that uses Vue and VueX to asynchronously load, filter, and display search results. Clicking on a result is a basic browser redirect, taking you to the result's page, doing a full Rails page load, and again using Vue and VueX to manage any dynamic state (any static content is rendered via ERB files).

This has created a very clear and simple structure that allows each page to have any dynamic functionality it needs, but no complications from having to maintain navigation or browser history via JS. The browser already does that natively, and I get to spend my time working only on actual features--not recreating the browser's built-in functionality, or debugging my own take on it.


> Where in this process did you need to start adding navigation via JS? Did that functionality really require JS, or was it implemented that way just because other parts are JS, and the trend was continued? Could you have used Stimulus controllers to manage functionality on the page itself, but when it came time to navigate to a new page, just done a basic browser redirect?

In my specific example, the navigation itself didn't happen through JS, but you need to hook into navigation APIs to handle backing into a partially-filled form after submitting to rebuild the UI to reflect the state before the user hit submit. To make things even more fun, Turbo/Stimulus (sometimes?) breaks bfcache in Safari & Firefox so they behave different from Chrome.

Personally, I detest the vast majority of SPAs, especially the ones from Google, Facebook and Linkedin. Not even sure what they do to make them so horribly slow to use.


> In my specific example, the navigation itself didn't happen through JS, but you need to hook into navigation APIs to handle backing into a partially-filled form after submitting to rebuild the UI to reflect the state before the user hit submit.

I'm not sure I understand. The "conventional" solution to this is not to do it -- the reason browsers don't re-fill submitted forms is to prevent duplication. Give users the ability to post-backclick-repost and your DB will rapidly fill up with duplicate rows.

If you do need to back-navigate to a page that has been submitted (say, to edit the submission w/o requiring an edit-specific URL), you have the server re-render the form with the filled values.

If that isn't sufficient, you can always push your form state on the history stack using the history API, so even though this is a bad idea (IMO), you can still do it pretty easily without needing to resort to re-writing nav.

I hate to say it, but I feel like a lot of this stuff had server-side solutions that worked fine circa 2010, but have been almost completely forgotten.


But browsers absolutely do keep the form filled unless your HTML or JS says otherwise. Pressing back after submitting on a vanilla HTML form keeps all values in place. As it should.


Not after a POST. If you change the form method to GET, I believe this is the behavior.

There are tons of good reasons to prefer this behavior, aside from just avoiding duplicates. Think of login forms.


Yes, even after POST. Why not try it, right here, on Hacker News. The comment form uses POST. I post this comment and go back and my text is still in the input box.

In fact, browsers even have re-submit built-in: Just press F5 on a page that was retrieved using POST. That’s why so many sites use a redirect after POST.


Redirect after POST was a pattern created to bypass the warnings that (most) browsers emit on back-navigating after form submit, while still preventing double-submit (by redirecting to the GET):

https://www.theserverside.com/news/1365146/Redirect-After-Po...

It sounds like this warn-on-double-submit behavior is convention and not required, so maybe it has become more common for browsers to stop doing it.

> Why not try it, right here, on Hacker News.

OK, I did. HN does a 302 redirect on comment POST to the comment GET. When you back nav, you are seeing the response to a GET request to /reply?id=X

(In other words, the form is re-rendered.)

Edit: oddly, however, submitting on form rendered by GET /reply?id=X ends up creating a new comment, so that's just weird server logic. It might explain why you occasionally see duplicate comments on HN.

Edit 2: on backnav to a comment edit, the redirect is to GET /edit?id=X, which is a re-render of the edit form, the process I described in my OP. Not sure what's going on with the back-after-post...maybe a bug?


> When you back nav, you are seeing the response to a GET request to /reply?id=X

With the text you previously entered, which the browser restored. If it did not for you, you have non-standard settings like disabled caching. I can assure you that it behaves the same on Firefox for Linux, Windows, macOS and Safari on iOS: go back, form content is restored.

In fact, the entire internet is full of people trying to get rid of this behavior.


I don't have non-standard browser settings. I watched the form submit in the debugger. It responds with a 302 to the GET request.


Thanks for sharing! That helps me understand things better.

> you need to hook into navigation APIs to handle backing into a partially-filled form after submitting to rebuild the UI to reflect the state before the user hit submit

Was this a hard requirement for the feature or just a nice-to-have? I understand why you'd want that behavior ideally, but it also seems the sort of thing that adds much additional complexity for a problem that isn't (at least in my eyes) severe. Having people re-fill things isn't too much of an ask, I think, and if it's something crucial, it might be better served with a preview page that allows additional changes. Or a "Make changes" link that redirects to a page that can load from the submitted state. There are easier ways to handle it than manually controlling navigation.

But yeah. If the behavior is a hard requirement from the business/product side of things, then using an SPA is no longer an architectural decision made for technological purposes, but is a feature requirement from beyond. And that's a whole different matter.


A little bit of both probably. Supporting it when navigating back is probably a nice-to-have, but persisting file upload previews etc through a form submit that might fail due to some unrelated ActiveRecord validation (maybe a blank field) is a must.


I think the best solution for this is just doing fullstack dev with SSR + partial hydration.

The issue is that, so far, we haven't figured out how to have a good fullstack DX.

Remix is probably one of the best attempts so far, but it still leans heavily towards being a very sophisticated renderer for the front end. The proof is you probably would not use Remix to create a 100% backend project with no front end. Same with SvelteKit or Next.

Until fullstack frameworks get more serious about the backend, we will be in this weird limbo.

In my current project I use Fastify as my main backend framework, and then use Svelte for SSR + hydration. I lose a lot of the frontend sophistication that SvelteKit brings to the table, but OTOH I have an amazing backend framework and total control and flexibility.


At the risk of a pile on, this is what jQuery was brilliant for. And frankly the native browser APIs have caught up enough that scenarios like what you want to achieve are simple to implement with just a script tag and a sprinkling of JS. I don't know what "Stimulus controllers" or "Turbo frames" are but they don't sound necessary.


Yes. Nobody should have to make a SPA to accomplish what jQuery did with sprinklings of JavaScript. jQuery was maligned because many people used it to make XHR requests all over the place, resulting in non-deterministic async behavior. jQuery was made to make adding small bits of JS easy and work in almost all browsers.

Bringing in React to tame those non-deterministic events firing all over the place greatly improved the situation, but it has become: if someone needs JS, bring in a SPA framework. This is in spite of the browser world getting much better. So just as jQuery was used to make SPAs to bad effect, SPA frameworks have been used to do minimal JS actions in a bad way.


Phoenix LiveView really does solve a lot of these problems.


It does and it has lots of work-alikes on other platforms (though some are not nearly as good). For almost any app that is inherently useless when not connected to the server, it's a pretty good fit.


I was just going to say this sounds like a great example of something that Elixir, LiveView, plus maybe a little Alpine.js could handle...


Inertia [1] is an interesting project in this space that might solve some of your problems. I haven't used it, but I believe the general goal is to make it easy to "plug in" client-side frameworks (React, Vue, Svelte) into server-side views.

[1] https://inertiajs.com/


Yeah can someone fix this please. I'm experiencing the same thing with a rails app I'm currently building.

In all seriousness it feels like there must be an elegant way to do more responsive modern functions while keeping the core server rendering concept. It's 2022 after all. But no I don't know a good solution either.


HTMX lets you update the URL with hx-push-url="true"
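For anyone unfamiliar with htmx, a minimal sketch (the /users/42 URL and markup are illustrative):

```html
<!-- Clicking fetches an HTML fragment into #detail and pushes
     /users/42 onto the history stack, so back/forward and
     bookmarks keep working. -->
<button hx-get="/users/42" hx-target="#detail" hx-push-url="true">
  View user
</button>
<div id="detail"></div>
```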


This problem you described is exactly why SPAs will continue to dominate over the traditional server rendered websites.

Often when I see people arguing against SPAs, they are peddling trivial toy websites that don’t do much and don’t change much.

When you need to build a serious application on the web with quickly growing feature sets and complex state management, just use a SPA. It’s 2022.


The Basecamp guys created Hey which is a mail app not implemented as a SPA. It's probably the fastest mail app I've ever seen while delivering around 40kb of javascript.

If Hey is a toy app, then you must be working on some truly alien projects from the future or something.


While it might be fast, it feels like you're using a Rails app with Turbolinks. Page loads. Feels barely interactive.

Compare native Mail app and Hey. If anything should be a SPA, it’s an email app.

Also, who cares how many kb of JavaScript your mail app uses? It's completely the wrong metric to optimize for an app someone uses every day, multiple times a day, and likely has open all the time.

(Also haven’t heard people using Hey much since the launch)


SPAs dominate? Off the top of my head I can't think of a single SPA that I actually use regularly. I worked in a niche a few years back where they seemed common (configuration interfaces for embedded systems) but looking at my own habits and the sites that dominate in terms of web traffic, that's thankfully not something that has caught on generally.

I can sort of see the point if the "A" part of "SPA" actually applies to your product, but for the dominant players that doesn't seem to be the case.


for simple cases, you can use alpine.js, petite-vue etc. However, if you are considering "routing", then it is no longer a "little bit".


Did you have a look at https://htmx.org/ ? I think it solves the "just a tiny bit of interactivity on an MPA" kind of thing.


It was probably never too beautiful of a technique, but JSF does solve this problem splendidly. You basically have a view-flow besides the usual ones and it can act in many of the ways SPAs can.


Progress bars and previews are simply a matter of attaching events to the HTTP request and the file input respectively.
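A sketch of the progress-bar case (the /upload endpoint is a placeholder; XMLHttpRequest is used because fetch doesn't expose upload progress events):

```html
<input type="file" id="file">
<progress id="bar" value="0" max="100"></progress>
<script>
  document.querySelector('#file').addEventListener('change', (e) => {
    const xhr = new XMLHttpRequest();
    // Fires repeatedly as bytes go up the wire.
    xhr.upload.addEventListener('progress', (ev) => {
      if (ev.lengthComputable) {
        document.querySelector('#bar').value = (ev.loaded / ev.total) * 100;
      }
    });
    const body = new FormData();
    body.append('file', e.target.files[0]);
    xhr.open('POST', '/upload');
    xhr.send(body);
  });
</script>
```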

When someone submits a form and navigates back, don't you just re-display the empty form?

If it needs to be a form bounded with the uploaded file (maybe as part of a wizard), then why not ask the server for it instead of storing client side state?

Maybe using Javascript without a SPA looks hard because you're trying to avoid storing state on the server as well as bypass native browser features.


I feel like react-rails (https://github.com/reactjs/react-rails) is basically perfect for this. You just make the photo upload its own react component and render it as normal from your rails view. You basically encapsulate the small bit of complex state into a component and it doesn't infect the rest of your app with SPA.


"I think one of the unsolved problems of client-side interactivity on websites is how difficult it is to add it just a little bit of extra client-side functionality to a traditional server rendered website. " I've started using django-unicorn lately and I'm hopeful that it may solve that problem for many use-cases.


Yes. The fitness of an architecture can be partially measured by looking at the cost of feature updates.


SPAs are a pattern that's been applied too broadly IMO, but it's going a bit far to call them a mistake. The aims of an SPA are pretty noble - the idea of essentially removing the network round trip when a user clicks on something is not a bad one. It means things are faster, they work if your connection is flaky, they can do things like offline support. Those are good features.

They might not be actual requirements for a blog or a brochure site but it's not totally unreasonable for someone to want their website to work that way. The question is not whether or not an SPA is a good or a bad thing, but whether or not the cost (more code, more complexity, potentially a series of background requests after the initial page load) is worth paying in order to get the benefits.

When either an SPA or a multi-page site is done well most users can't tell which sort of site they're looking at. That should be the goal. Make stuff where the user forgets about the tech. Make a website that just works, that's fast enough for users not to think 'this is a bit slow', and that's coded in a way you can maintain and work on for a long time. If you get those things right then no one can reasonably criticise what you've made no matter how you've made it.


> the idea of essentially removing the network round trip when a user clicks on something is not a bad one.

Practical SPAs have many more network roundtrips than the equivalent server-rendered web interface. Every AJAX request is an extra roundtrip, unless it can be handled in parallel with others, in which case you're still dependent on the slowest request to complete. With SSR, you can take care of everything with a single GET and a single POST no matter what. Images load in parallel of course, but those are non-critical.


I pretty strongly disagree with this.

The distinct advantage of an SPA is that, done correctly, cached data lets you render pages instantly.

Who CARES if the SPA had to make 3xRTT in the background, if it can serve the next page up instantly because that data is already present and cached, it's a huge win.

The server rendered app will ALWAYS have to wait at least 1xRTT for every new render. The SPA does not.

Still doesn't make an SPA the right fit for everything, but on bad networks we get incredible improvements in performance by rendering from cache and occasionally handling cache updates in the background. Users fucking love it compared to waiting 2-5 seconds for every page load, even if they were JUST on the page a second ago.
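A minimal sketch of the pattern - render from cache immediately, refresh in the background. All names here are illustrative, not from any particular library:

```javascript
// Stale-while-revalidate page cache: serve cached data at once,
// then update the cache (and optionally the UI) when fresh data lands.
function createPageCache() {
  const store = new Map(); // key -> { data, fetchedAt }

  return {
    get(key) {
      const entry = store.get(key);
      return entry ? entry.data : undefined;
    },
    put(key, data) {
      store.set(key, { data, fetchedAt: Date.now() });
    },
    // `fetcher` is any function returning a Promise of fresh data.
    // Returns whatever is cached (undefined on a cold cache, in which
    // case the caller shows a spinner) while the refresh runs behind.
    swr(key, fetcher, onUpdate) {
      const cached = this.get(key);
      fetcher(key)
        .then((fresh) => {
          this.put(key, fresh);
          if (onUpdate) onUpdate(fresh);
        })
        .catch(() => { /* keep stale data on network failure */ });
      return cached;
    },
  };
}
```

On a warm cache the navigation renders with zero RTT; the background request only improves freshness.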


> The distinct advantage of an SPA is that, done correctly, cached data lets you render pages instantly.

Yeah, but full page caching is a thing, and in my experience, teams writing traditional server-rendered pages are much more aggressive in their use of response caching than are teams writing JSON APIs. Honestly, a Rails/Django/Laravel app with smart caching headers and a Varnish instance in front feels more reliably instantaneous than the bespoke caching solutions each SPA seems to invent for itself.


How does that work if a user is logged in and private data is visible on the page?

I’m thinking of building something that only renders public data on the server, and then later gets that which pertains to individuals through an API. But I don’t want to spend more time on a lot of complicated plumbing than on core functionality in my application.

A SPA seems like the path of least resistance here, since all you have to do is put the appropriate cache control headers on your API endpoints, and let the browser abstract it away for you. I know programmers like to build bespoke caching solutions, been there, done that, but you don’t actually need to.

My biggest gripe with SPAs is the overhead of building and maintaining an API for everything, but GraphQL takes away a lot of that pain.


I've seen a lot of projects that use a full response cache to capture a view with holes where user-specific information would go and then fill in those holes with some client side JS. I wouldn't call those SPAs, though.

But I'd say it's more common to just use "private" cache headers in this case for any page that differs for logged in vs unauthenticated users. The user's browser will still serve as a full page cache, but shared caches will ignore those pages.
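For reference, the relevant response headers look something like this (values illustrative):

```http
HTTP/1.1 200 OK
Cache-Control: private, max-age=60
Vary: Cookie
```

`private` lets the user's own browser cache the response while telling shared caches (Varnish, CDNs) not to store it, and `Vary: Cookie` guards against one user's page being served to another by anything that does cache it.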


GraphQL is a protocol, as such it cannot take away the pain of creating a backend for your frontend.

There are tools which make the creation of the backend painless and utilize GraphQL for sure (e.g. Hasura), but claiming that GraphQL solved that issue is nonsensical, because writing a GraphQL API is generally way more annoying than the equivalent REST API.


Yes, I obviously mean leverage the existing tools, not roll my own implementation of the protocol, that would just be stupid and counter-productive.

Equivalent tools for REST and non-REST HTTP do exist, but none that I've seen has been as cohesive as for instance Apollo. I used it on a small side project and was amazed at how much time and effort it saved me. The alternatives are a patchwork of different tools with slightly different goals for things like client code generation, documentation generation, playgrounds, HATEOAS, etc. that have to each be evaluated and shoehorned in with the others.


Have you tried to use a heavily SPAed site from a slow, distant (high latency), or metered connection? A SPA that works and feels great from a big city quickly becomes unbearable when internet access isn't as ideal. There are ways to handle this nicely, but maybe 5% of devs actually think about and test that, and no PM will allocate sprint time for it.


Yes - that's literally exactly what I talked about.

I can tell you - My app renders immediately in these cases, precisely because I lean heavily on a cache.

I preload basically all the data a user might need at startup (prioritizing the current page) and then optimistically render from cache.

I'm very familiar with these kind of situations (I have developed software designed explicitly to handle offline-only cases, and I'm very familiar with how bad a connection might be in rural Appalachian schools.)


> I preload basically all the data a user might need at startup

This can be a great strategy when you're dealing with bounded data that's not sensitive, but it's important to recognize that this approach is often inappropriate. A web app may be allowing users to wander around terabytes of data, or it may need to make highly consistent authorization determinations before allowing a user to see specific data, or you may need to keep a highly detailed audit log of who requested what when (e.g., if the app targets healthcare or government users), which aggressive preloading would render useless.


> Still doesn't make an SPA the right fit for everything


I would counter that data freshness and authZ concerns make SPAs a poor fit for most things. What you're proposing in this thread is that good SPAs essentially need to hold all data client side and not have to worry about out of band cache invalidation, which seems like a pretty rare set of circumstances.


It's still worth pointing out.


Awesome, thank you for doing that! I wish more people did. I've had several people tell me that "in a web app you only want to load the data you need, at the time you need it" and I die inside a little.


This works great as long as it's not a first visit though. For new visitors with a spotty connection this will be a nightmare. Also I assume any new update you release which might need to refresh the cache would also be a problem in this situation. I don't think it's that easy. Not saying that you're not doing it right, just that it is not a trivial thing and almost everyone else won't do this right.


In case of slow internet, SPAs are much easier to make responsive than an SSR application.


Sure, if that's your goal. It won't happen by accident, and the "default" method of SPAs make it really easy to fire off lots of different requests, often on demand.


It doesn't happen by accident in the same way that preventing an SQL SELECT N+1 problem doesn't happen by accident. Poorly written software is poorly written.


Have you tried to use a site that reloads all the data on every click, from a slow, distant (high latency), or metered connection? Same problem.


Of course, and it's usually not as big of a problem because everything you need comes in the page render. With SPAs it's not uncommon to have to click 5 or so buttons/menus and then wait for data to load, then click again, and more data.

That said there are plenty of SSR sites that are terrible too. It's not the technology that's at fault, but SPAs and common dev patterns in the community definitely enable and encourage making lots of small requests.


Worse yet, with a traditional web app, you are in control of loading the page - or stopping the load, for that matter. With an SPA, it does all those requests in the background. Many apps fail silently (as in, don't refresh) if request to the backend fails.


> have to click 5 or so buttons/menus and then wait for data to load, then click again, and more data.

How is that any different from SSR? Waiting for a full page reload is fine, but incremental loading is not?


It's not that SPAs cannot handle that but it's rarely a requirement.

I once launched a product on (early days) Google App Engine and it was unusable as a purely server-side rendered website because sometimes it would take ages to load, or worse, an error would come up eventually. With AJAX it could be solved and failing requests could be retried. It's a pity that this is rarely done, but it is of course possible.


Using the web at all is your problem there.

If that's your targeted usecase, send an exe on a thumb drive, and get them to call you back over the phone


I know this is/was not your intention, but to be honest, I find that attitude insulting and frankly discriminatory.

The internet has become essential to modern life, and many people have no choice but to use it. Most of my monthly bills don't have any alternative to paying online. (Some will take a payment by phone but they'll charge a $20 "phone fee" or similar).

There are certainly sites/apps that just can't work without requiring a big and short pipe, but it's extremely defeatist to suggest that it's hopeless and we shouldn't even try.

You're certainly not alone with your opinion. I suspect that attitude is a big part of why this is such a problem nowadays.


> Who CARES if the SPA had to make 3xRTT in the background, if it can serve the next page up instantly because that data is already present and cached, it's a huge win.

The data is never in the cache. It is for a developer because they forget to clear their caches when testing. For most users the data will not be cached unless they're sitting on the SPA all day.

So when the user doesn't have data cached, that 3xRTT happens on their spotty 4G signal and the SPA pattern is of no help. Not everyone is sitting still on great WiFi. Someone's cellular reception can go from great to unusable just walking down the street.


>The data is never in the cache.

Then you have a shitty app.

Between localStorage and IndexedDB - unless you're literally out of disk and having storage evicted - if you've loaded my app once (and I mean once, not once this browser session) then I have data in cache.

Basically - don't blame SPAs, blame developers for assuming that the same "request the data every page view" paradigm is still in play.

It's not, and storage is insanely cheap.
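A sketch of what that looks like in practice. The storage object is injected here so the snippet is testable; in the browser you'd pass window.localStorage (all names are illustrative, not a library API):

```javascript
// Persist fetched data across sessions so a returning visitor
// renders from disk, not from the network.
function createPersistentCache(storage) {
  return {
    save(key, value) {
      try {
        storage.setItem('cache:' + key,
          JSON.stringify({ value, savedAt: Date.now() }));
      } catch (e) {
        // Quota exceeded or storage disabled: degrade to network-only.
      }
    },
    load(key) {
      const raw = storage.getItem('cache:' + key);
      if (raw == null) return undefined;
      try {
        return JSON.parse(raw).value;
      } catch (e) {
        return undefined; // corrupt entry, treat as a miss
      }
    },
  };
}
```

Usage would be `createPersistentCache(window.localStorage)`; swapping in an IndexedDB-backed implementation of the same two methods handles larger payloads.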


> Then you have a shitty app.

This feels a lot like you're making a “No True Scotsman” argument. The behaviour you're saying doesn't / shouldn't happen has been the defining characteristic of SPAs since the beginning and it's still immediately visible on most of them as soon as you have a less than perfect high-speed internet connection. If statistically nobody except the Wordle developer can make something which handles crappy WiFi the problem requires more than just saying someone is a bad developer.


> done correctly

Big assumption. This is what the whole discussion is about. Doing "correctly" an SPA is incredibly expensive.

I can also assure you that when an MVC application is done correctly you can have an equally good user experience.

> The server rendered app will ALWAYS have to wait at least 1xRTT for every new render. The SPA does not.

This is an outdated idea of how server rendered apps work. See Unpoly, HTMX, LiveWire, Hotwire, etc.

I'm personally using Livewire. I make server requests in only 2 situations: 1) when going to a different page (I'd need that anyway with an SPA, given I need "SSR") and 2) when I need to write or read data from the server, which with an SPA would mean an API call anyway. Every other interaction is done with Alpine, 100% client side.


Unless their cache is cold and then they navigate back before your first render completes. Often we are not the sole source of a piece of information and first load time counts.


> Unless their cache is cold and then they navigate back before your first render completes.

This doesn't matter. It's an SPA - my js context hasn't been wiped because I'm not letting the back button eat the whole thing and start over. I just complete the request and store in cache either way, and next time they hit it, it will be there.

> Often we are not the sole source of a piece of information and first load time counts.

Sure - caching is hard (full stop - it's really hard). But I think if you actually consider most applications, there's not really much difference between an SPA rendering from cache and a server-side page that a user hasn't refreshed in a bit.

Basically - it's not going to solve all your problems, but in my experience developers really underestimate how much usage is "read only" and a cache works fine.


It doesn’t matter‽

SPAs push giant gobs of logic and CSS ahead of the Time To Interactive and cross their fingers that nobody will notice, because it's not supposed to happen the next time, due to magical cache thinking.

People have been ignoring actual research on caches for the entire time SPAs have been around. First-time visitors have this as their first-time visit. Occasional visitors show up after the last deploy, and even enthusiastic web users get evicted from cache between loads. And then they don't convert, because your site is "always slow".

P95 statistics don’t tell you if 5% of your users are having a bad experience. Or if new users are massively over represented in the P95 time. You are flying blind and shouting for other people to follow you.


Look - I understand that I'm making a tradeoff here, and I'm very clear that not all sites are the right fit for an SPA.

But yes - I explicitly handle offline-only use cases, and god-damned terrible connections (think <1kb/s). In these situations, users are going to wait a long time for the first load either way.

They call in excited as all get out when they realize they only have to wait once with my application, and not once every page load. It's the difference between having a chance to grab a coffee at the start of the day, vs pulling your fucking hair out all day long.


Your application seems to be a magical unicorn (not being sarcastic!). Most SPA websites I use on a regular basis end up having to reload all of their page components every visit, and frequently on every 'back'.


I'll be really honest here - it helps that it was a direct requirement from day one, and we have financial incentives aligned with handling the offline-only case.

If you handle offline up front - you mostly get bad connection speed wins by default.

It also helps that our app really isn't intended for casual use - ex: we have very few users who just load a page or two and then leave. Mostly they'll be interacting with the page for a duration of between 30 minutes and 2 hours (sometimes as often as once a second).

So yes - very much not a "blog" or "news" or "social media" site, and I think the value proposition for most companies might be harder to sell.

But I personally find it very, very nifty that we can do this at all in a browser. 10 years ago it would have required an installed application, or a port to each mobile platform. Which is sadly still a hard sell in some of the particularly poor school districts we worked with (mostly computers from the early 2000s, if you're lucky a really cheap tablet - like a kindle - floating around)


To be fair, there are SPA frameworks that try to minimize the amount of logic that's shipped to the client, Svelte is an example, which should be as lean and efficient as any hand-coded JS. It's more of a side effect of what frameworks you use than SPA per se.


It's still a specialty sub-discipline though. Getting the base size down is a wonderful goal for a framework. But for the behavior the company is adding, there are both diminishing returns and a lot of surface area for bugs.


> The server rendered app will ALWAYS have to wait at least 1xRTT for every new render. The SPA does not.

Not necessarily. I think you're conflating caching/fetching and rendering here. You can cache rendered views in the same way you'd cache the data for those views.

Turbo does this, for example. When you navigate to a cached page it will display the page from the cache first, then make a request to update the cache in the background. So it's similar to what you describe, but with server-rendered content.


Turbo is making your site an SPA, and hiding the implementation details from you

That's not what the article was about (it was very clearly advocating for the standard browser flow)


> You can cache rendered views

Sort of. There's been no shortage of vulnerabilities where user data has been exposed when an authenticated users request was cached and re-used for subsequent users. It's been my experience that most SSR apps disable caching for authenticated users, or rely on more explicit control of components for partial caching where possible.


Preloading is a browser-native feature, you don't need a SPA for that. And just as often you can't know in advance what data will be requested by the user.


> And just as often you can't know in advance what data will be requested by the user.

This is a cop-out. Storage is insanely cheap, and abundantly available on most machines. Plus the browsers will usually give it to you.

Can I know precisely what series of pages this user might hit? Nope. Can I load ALL the data this user might need to read? Probably.


Storage is cheap but bandwidth is not guaranteed. Getting content from your server to my device is not necessarily fast. Making wrong assumptions about my device's environment makes for a poor experience for me.


If you cache well, you will absolutely use less bandwidth with an SPA.

Sure - that first load is going to be bigger, but literally every subsequent request will use less data.

I try to reward consistent and current customers - and I'm not selling a news article or a social media site...

Now - if you do things like attempt to refresh all of the cache often, then sure, you have some problems to deal with.

> Making wrong assumptions about my device's environment makes for a poor experience for me.

Tough shit? I go way out of my way to make sure that loads on bad lines (or no lines) don't interfere with my users. You could just as easily say "I'm on gigabit why is this app using my disk space? Making assumptions makes for a poor experience. WAAAH".


> Can I load ALL the data this user might need to read? Probably.

You can do that with HTTP/2 server push on a multi-page app, too. The main difference is that with an SPA, you get to reinvent it all yourself.


Yeah - there's a trade off here.

Server push gets you closer with a traditional app, but it doesn't really solve the problem (or rather - yes, you get to reinvent it yourself, but you can do it better, with the specific context of your app)

Basically - My argument is that developers tend to treat SPAs the same as "the website" but with a bigger js framework. And that's the wrong mindset.

Instead, you should think of the first load as an install of a native application. You want to push absolutely everything, not just the next couple of resources you might need (like imgs/css). Ideally you push the current page first, and do the rest in the background while the user can interact with the current page, since you're almost never CPU bound on terrible connections anyways.

You'll get one additional load at login (because now I'm pushing all relevant user data), but after that... basically everything is rendered from cache (including optimistic cache updates based on what the user might be adding/updating).

After that... unless you log out or have cache evicted, you're going to see immediate response times, even while offline.

---

This isn't easy, and most teams don't bother, so if the comment is "If you're treating an SPA like a traditional page - don't bother" then I tend to agree.

but an SPA genuinely opens up options for bad connections that otherwise just don't exist (or are so bad people won't use them)


> Practical SPA's have many more network roundtrips than the equivalent server-rendered web interface. Every AJAX request is an extra roundtrip, unless it can be handled in parallel with others in which case you're still dependent on the slowest request to complete.

This is largely solved with innovations like GraphQL (which you don't need a SPA to use). Pages that require multiple API calls can show their UIs progressively with appropriate loading indicators. For SPAs that have ~long sessions, it's arguably a good thing to have multiple API calls, because each can be cached individually: the fastest API call is the one you've already cached the response for. This is stuff we were doing at Mozilla in 2012, it's nothing new.

There's also nothing stopping you from making purpose-built endpoints that return all the information you need, too. Your proposed solution (SSR) is literally just that, but it returns HTML instead of structured data.


What they're meaning is that the total time is longer for the SPA if you don't go all out (and nobody does).

SPA:

    1. get html from server (1 roundtrip)

    2. get resources (js/jpg/movies/gifs/favicon/...) from server (1 roundtrip)

    3. get ajax calls from server (1 roundtrip)

    4. process answer from server + update page (not a roundtrip, but not zero)
Vs traditional:

    1. get html from server (1 roundtrip)

    2. get resources from server (1 roundtrip)
Also, if there are sequential ajax calls required to build the page (like a list -> detail view), it goes up a lot unless you have speed-oriented (very bad code style) API design. For instance you need a separate "GetListOfUsersAndDetailsOfFirstUser()" (or more general "getEverythingForPageX" calls); you can't do "GetListOfUsers()" and "GetUserDetail()" separately.

So to match traditional webpage total load time you cannot do any ajax calls in your SPA until after the first user action. And even then, you only match traditional website performance, you don't exceed it.

Time until the first thing is on screen however, is faster in SPA's. So it's easy to present "a faster website" in management/client meetings, despite the SPA version actually being slower.

You can make SPAs faster than traditional websites ... but, for example, not with just JavaScript build tools. You need to do the first calls server-side, have JavaScript process them, and only send the result of that preprocessing - the resulting DOM - to the client, after optimization and compression. After that you need to do image inlining, style inlining, etc. I know React can do it, but does anyone other than Facebook actually do that?


> For instance you need a separate "GetListOfUsersAndDetailsOfFirstUser()" (or more general "getEverythingForPageX" calls). You can't do "GetListOfUsers()" and "GetUserDetail()" separately.

This is literally a core use case of GraphQL. But you probably also want to fan out to preload all of the other user data in one API call: that's the whole point of doing it on the client. If you are trimming the handful of kilobytes on the first request, you're paying the price for that on each subsequent load.
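Sketching that in GraphQL (the schema fields here are made up for illustration):

```graphql
# One round trip replaces GetListOfUsers() + GetUserDetail():
query ListWithFirstDetail {
  users(first: 20) {
    id
    name
  }
  user(id: "1") {
    # full detail for the first user
    id
    name
    email
    lastLogin
  }
}
```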

> I know react can do it, but does anyone other than facebook actually do that?

Yes. Every job I've worked at professionally since 2016 (when this became mainstream) has done this. I do it in my side hustle. It's really not that hard.

> 2. get resources (js/jpg/movies/gifs/favicon/...) from server (1 roundtrip)

This doesn't go away with traditional sites.


> Yes. Every job I've worked at professionally since 2016 (when this became mainstream) has done this. I do it in my side hustle. It's really not that hard.

Can you link to a tutorial for this? I'd be most interested.


You eliminate #3 by having data preloaders inline it into the html response and move it into #1. But without SSR your rendering still starts after #2 instead of #1.

I think your point is still correct. SPA by default are a huge regression and you have to build and maintain sophisticated web infra to reclaim it.


Yes, but most of the time (getting the HTML and all the Javascript etc) is only at startup time. Let it take a second or two. From then on the user is in the application, and everything is much faster than it used to be with getting entire new pages from a backend every other click.


The more something behaves like an actual application and the higher the number of user interactions per visit, the more sense the SPA approach makes IMO. This is because the initial load overhead of an SPA gets amortized over subsequent interactions.

Blog or corporate/news website on one end and an interactive game or video editor in the other, with a site like FB somewhere in the middle.

It’s not a hard rule and I can already think of counter-examples but this line of thinking is useful when architecting a new project.


I wrote a post very similar to this comment that you might be interested in reading. I broke down what takes time to load, how fast the browser determines something has loaded, and how fast the user can perceive the page loading. I ended up having to take a high speed video and measure frame by frame the time it took to load for user perception.

https://www.ireadthisweek.com/author/ireadthisweek/post/2022...

The TLDR is it takes longer than reported.


SSR isn't a one-size-fits-all solution. Blogs? Sure. Corporate marketing site? Absolutely. Wikipedia? Of course.

A webapp where state can exist in various components is a perfect fit for SPA-s. Clicking a button in a widget and submitting the page with every other state, updating them and refreshing the page makes no sense.

SPA-s also help with separating frontend and backend development, they are different beasts. I know many backend devs who wouldn't touch frontend with a 10m pole and not because of JavaScript, but because they don't like design and UX in general.


People generally have a very bad estimate of just how engaged users are with their sites.

These people have lives and other shit to do besides spend all day in my web app. There’s a lot more accidental and exploratory clicks than you think and all of the background requests run even though the user is only on that page for half a second.


> Every AJAX request is an extra roundtrip

But not every AJAX request is blocking or necessary for the site to be usable.


If you can do it in a single GET/POST in an SSR, you can do it in a single XHR call in an SPA. There's no reason it has to be split up in separate XHR calls.

You just have to want to do it.


> When either an SPA or a multi-page site is done well most users can't tell which sort of site they're looking at.

And therein lies the problem. In the _vast_ majority of SPA sites I've been to, they are not "well done" by this definition. It is commonplace for things to break, like the back button as the quintessential example, because the developer(s) didn't spend the time to make sure things work correctly. With a non-SPA site, you generally have to go out of your way _to_ break those same things.

I like SPAs for some things (gmail being a good example), but they should not be the default implementation; there should be a _very_ strong argument before the SPA architecture is used.


You're just saying that bad software is bad. That's tautological.

Without SPA you have horrific 10page forms and the challenge of maintaining state as you go back and forth to make edits.

The hard part is managing state, and there is no way to avoid it if your product is not purely readonly. SPA is one strategy, and a pretty good one.


If one style of writing applications is far more prone to having “bad software”, consider that this is a valuable signal about the difficulty of using that style effectively. SPAs require you to take on ownership of more hard to debug challenges and that's an important factor to consider when selecting tools.


The GP was not saying "bad software is bad"; they pointed out a lot of footguns that are introduced by going the SPA route. "More code -> more (opportunities for) bugs" is the truism at play here.


>It means things are faster, they work if your connection is flaky, they can do things like offline support. Those are good features.

Yet the reality seems to be the opposite. If my connection is flaky (which it often is), then SPAs seem to fail more often, and it isn't obvious what is going wrong. They don't seem to be faster either; long pauses between pages are common.


Right. The typical SPA doesn't recover at all from network failures, and you end up poking random buttons hoping that it will reach some sort of consistent state. Then you find that doesn't help and you need to refresh anyway, which is a reboot for the whole 'app'. So much for saving page loads.


I think it's exciting to see frameworks like https://remix.run/ trying to temper the disadvantages of SPAs by relying on web standards and server-side rendering. It's pretty cool that this is a JavaScript framework that can work without any (client-side) JavaScript.


Honest question: why is remix the primary example for this now? Next.js has been doing this for years and is really amazing. It seems like a ton of people had no idea this was even a problem they had until remix came along, and now remix is the savior.

Am I missing something with remix? Is there anything really novel that it does/introduces?


I don't know. I'm not very familiar with Next.js (I'd heard about it but never really looked into it much). Your response inspired me to investigate, and I found an article by a Remix founder about the differences [1]. It's obviously biased, but maybe still interesting. It looks like there was an HN submission and a bit of discussion, too.

There's a bunch of other comparisons out there, too.

[1]: https://remix.run/blog/remix-vs-next

[2]: https://news.ycombinator.com/item?id=29983950


> they work if your connection is flaky,

Not saying it can't be done, but I haven't seen many SPAs that handle flaky connections well. Most stall with no indication to the end user, show endless spinners, or just break in some random way.


Those are very good points - I wonder how the author would see, for example, Google Sheets implemented as something other than an SPA.


This article seems to be directed at people who use SPAs to build more content-focused websites, which quite obviously are better handled by the browser itself.

However, for an application with a persistent UI, I fail to see how constant page loads and navigation just because "the browser can do this" make any sense at all.

Even the example is a bit silly - SPAs that should be SPAs don't really have "links" per se; they mostly have buttons with some defined action. Perhaps these buttons navigate to some other screen in the application, but reloading all of the client-side state every time one does this is absurd, to say the least.

Finally, from a technical perspective, you remove a lot of complexity through a clear separation of concerns: client code talks to an API instead of the server rendering HTML.

It feels like the people who write these articles don't actually remember how utterly shit the jQuery days were.


He was just unable to follow and understand the trend, and thinks that therefore the whole industry made a MISTAKE :D


It does feel like the tiring contrarianism type of heading as opposed to a particularly well thought out article.

In the same way I feel lesser able coders seem to dwell on shit-slinging against tech that is proven, works, and has solved innumerable problems when they bang on about how things were so much nicer with vanilla JS and how frameworks are lazy and slow or something, in order to cultivate a sense of superiority.

Any well made tech has its place; skill and experience are about knowing which tech should be used where.


>> He was just unable to follow and understand the trend, and thinks that therefore the whole industry made a MISTAKE :D

That's not true, though. If you read [his next blog post](https://gomakethings.com/how-to-make-mpas-that-are-as-fast-a...) you'll see he has lots of experience with architecting an advanced multi-page application which works and behaves much like an SPA.

It makes his anti-SPA post kind of redundant and a bit hypocritical, but he clearly knows roughly what he's doing.


I think it's just regular old click bait.




First of all, this is uncalled for. Second of all, it's not even an SPA. Maybe do the slightest bit of research before slinging shit at someone?


> Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.

https://news.ycombinator.com/newsguidelines.html


Yeah - I think a lot of it might be developers who don't understand that SPA and caching go pretty much hand in hand.

I'll admit that can make your life as a developer harder sometimes (to be blunt - caching is hard - full stop) but an SPA rendering from cache is basically a rocket compared to a server rendered page on a bad connection.

Absolutely no one enjoys waiting 2-5 seconds after clicking the back button to see a page they were just on, but that's the reality of a server rendered app on a bad connection. An SPA with good caching does, in fact, feel like a native app - in lots of good ways.


>no one enjoys waiting 2-5 seconds after clicking the back button to see a page they were just on, but that's the reality of a server rendered app on a bad connection

Wait, what. Don't basically all browsers nowadays keep previous pages in memory exactly not to do this?


If you read [his next blog post](https://gomakethings.com/how-to-make-mpas-that-are-as-fast-a...) you'll see he has lots of experience with architecting an advanced, cache-heavy multi-page application which works and behaves much like you describe.


Yeah, interesting.

I think that approach is fine if you have an application that's light on interactivity (like school curriculum or tutorial, in his case).

And I agree 100% on the service workers (Seriously, I'm right there with him - they're magic). I would also recommend leaning more heavily on persistent storage (localstorage/indexdb) but I'm also trying to handle offline only cases.

I think some of the advice falls down when you're genuinely trying to create an interactive experience. I probably won't expound much further, since I really try to keep business out of HN, but... I write applications that have been used for following treatment programs in education environments - think speech therapy or ABA for autistic students, where the user is inputting data frequently - as often as once a second - and the UI is updating in response to calculations done client side, which influence the instructor's program and plan in real time.

It's a lot of screens, often customized locally on the application, and removing JS (or even just writing pure js) hurts a lot. That's... hard to do with minimal js. Really, really hard.


>Absolutely no one enjoys waiting 2-5 seconds after clicking the back button to see a page they were just on

This doesn't happen because the browser caches the last page.


> However, for an application with a persistent UI, I fail to see how constant page loads and navigation just because "the browser can do this" make any sense at all.

In the same way that re-rendering the entire screen for every frame makes sense in a 3D game. Or how you buy a new pair of jeans instead of meticulously learning how to patch the old ones. It solves a lot of problems even if it sounds suboptimal. It was the standard practice when computers and networks were much, much slower, so it can't be that bad.

I think SPA and their entire ecosystem give some sense of optimality to purists which is probably shortsighted and wrong.


jQuery days were bad?


there were certainly people that committed war crimes with jQuery. Just like every other framework in existence. I yearn to return, honestly.


Nah, they weren't a mistake, at least not in isolation. Much of modern software development is a mistake, generally speaking. When you look at it that way, SPAs only failed in the sense that Object-oriented Programming failed, as well as the failure of various design patterns, microservices, write-once-run-anywhere, test-driven development, decentralization... I can go on and on with the number of engineering and computer-sciencey crap that never really lived up to its promises.

SPAs only suck as much as we suck. Just admit that SPAs are like any other tool, and if we're going to swing back to apologizing for goto-statements (something I don't necessarily disagree with) then we can't at the same time act like SPAs are a mistake in and of themselves. We collectively continue making mistakes and instead of owning up to our collective lack of craftsmanship, we are blaming the tool.

Only when we shed the mindset that informs us that computer "science" is actual science might we then be in enough touch with reality that we seriously impose some standards upon ourselves to address our clumsiness.


> SPAs only suck as much as we suck.

We suck pretty bad. SPAs are often the default because that is what the framework dictates, and nobody trusts a JavaScript developer to deliver any kind of quality product without some epic massive framework - including ourselves. This is the standard for hiring, performance, and delivery, and nobody dares deviate. If it's not an NPM package written by an anonymous stranger or an API on the framework, it's not an option. We don't trust each other and management doesn't trust us, so we let the framework dictate our every decision.


why do you think TDD failed :(

Doesn't every other science field work pretty much the same way, though? We make mistakes, we iterate, we make new and better mistakes. It may just be more obvious in software because it's easy to create stuff and iterate very fast, since there aren't so many natural constraints. Someone might even argue that this is what makes software also an art.


TDD, or rather the effort behind TDD, failed because its proponents couldn't help but preach it as a cornucopia of solutions with the insistence that it's appropriate for every circumstance. When you are familiar with the domain of a problem, TDD makes a lot of sense. For the project I'm working on now where I'm effectively learning at just about every step of the way, and said project is intended for very few users, it's highly questionable if I could have benefited from TDD. Even when I'm not doing something unusual, TDD doesn't always make sense.

Any research that currently exists around TDD is of poor quality and proves little, but TDD fanatics (not sure how many of them exist anymore though) speak like there's objective evidence in their favor. And some call this sort of thing "science"?

By the way, when I mentioned science, I was specifically referring to the concept of "computer science", of which there has been very little actual use of the scientific method in our lifetimes. Computer science is of course a legitimate area of study; if our heads are still so in the clouds that we are still calling it a "science" then it's no surprise that we're continually losing touch with the very technologies we create.

TDD also shot itself in the foot with this (IMO toxic) insistence that the tests are the documentation, which is a nice principle to keep in mind, but I've never seen it come close to replacing actual documentation. A simple comment block describing the why of a function is always better than scrolling through tests and reconciling inconsistencies with the mocks that were used. And yeah, I know that TDDing isn't really about "mocking", but let's face it [...].

Perhaps the worst thing of all is that TDD is founded on a somewhat dishonest implicit premise, which is that if you aren't TDD'ing then one must be either not writing enough tests or are writing inferior code because one is writing tests after development. TDD fanatics, in particular those freshly inducted into the cult, often think it's either TDD or nothing, and if you're not TDD'ing one particular part of an app then you're not following "standards". Maybe TDD is fine as a form of guard rails when you're a junior developer, but by the time you're a high mid-level and in touch with reality then you're going to intuit whether a unit of work will actually benefit from TDD.

TDD isn't bad at all in terms of its mechanics. It's a great idea, and I've used it many times. In fact I even TDD'd an entire web app that handled payment processing from start to finish. The part about TDD that "failed" was this belief that it is actually a generalized solution that needs to be applied everywhere, which led to it both being misapplied as well as dismissed when it becomes apparent that those promises of better and easier code/process/readability often don't magically manifest.

This is really the story behind just about every failed idea related to our field, not just TDD. SPAs were supposed to make pages snappier because it meant server responses to different user actions would require fewer bytes and no page refreshes. Instead of viewing SPAs as a tool with a purpose, people had to treat SPAs as the end-all-be-all of frontend development and a way to do everything, and now the world is stuck with many SPAs that are likely worse than if they were done as regular webpages.


REST APIs (literally, "Representational State Transfer") are a very nice match with SPAs, because now the browser is the sole source of truth for application state (and the server only has to produce some JSON data instead of entire HTML pages).

Literally the entire point of REST is that you don't have to maintain server-side sessions, and you shouldn't. This also has benefits for scalable applications, in that you can transfer your running application to a cluster of servers on the other side of the world, and it'll still keep working.

The trust boundary between the front and back ends can be a huge boon to security as well, and some things are literally impossible to do with a multi-page app, and you can still maintain URL parity with new actions in your page.

That's why SPAs were, and are, great!

That doesn't mean everything should be an SPA! A photo, news, blog, or recipe website might be better laid out as a separate page for each individual item. However, there are tons of other types of real applications that can only be built as an SPA.

Are they unwieldy, sometimes hard-to-code and maintain, and sometimes result in a poorer experience? Yes, but that can also be true of a multi-page app, especially if you're trying to shoehorn an SPA experience into a multi-page app and want to maintain the same user experience as a user moves around within the app.

A multi-page "app" isn't really an app anymore; it's a server-side app that is producing multiple pages. A single page app literally is running in the browser and only sending a bit of data back and forth with each user interaction. It's just a completely different model.


I don't disagree with most points here, but I did chuckle at this one:

> YouTube is a great example. Being able to keep a video playing while you explore other videos is fantastic.

I hate that (mis)feature. When I click something else, my attention is on the new thing. Having to go find the little still-playing video window to close it is a hassle.


As a counter: I love it. It allows me to add videos to a queue without having to build a playlist beforehand. And halfway through the queue, I can add more without having to navigate away.


Yeah I like it as well, but there should probably be an option to disable for those who don't like it.


And if you don't, you're probably going to open in a new tab. Too many webapps are designed with complete disregards to the fact that browsers have tabs.


Some like it some hate it. I hate it too but generally I like simpler things. I wish we could opt out of some features so everybody would be happy.


YouTube is awfully slow - back when I used it I dreaded accidentally mis-clicking on something because it would initiate 3 seconds of stuttering and shit moving around everywhere while their terrible SPA (poorly) reloaded the page.

Nowadays I use Invidious which is a proxy that renders good old server-side-rendered pages and it manages to be faster despite being a proxy.


agreed. i don't think youtube needs to be an SPA and simple pages work better. For comparison: porn sites.


I hate SPAs. I would never do another SPA again if it were up to me. It just adds too much mental context switching and overhead. I can develop fully server-side apps that are lighter, run faster, and take at least 20% less development effort (I actually compared that for the same task: https://medium.com/@mustwin/is-react-fast-enough-bca6bef89a6). So why would I ever do an SPA again if it were up to me? I would use https://github.com/jfyne/live which is inspired by Phoenix LiveViews. This is my professional opinion, having many years of experience in both kinds of web apps.


When I was reimplementing “reset state” for a logo click I knew it should have been serverside rendering.


Template based UI programming is like going back to the year 2007. Your views should be reactive. Elm, Flutter, React, Et Cetera understand this. Having a function that takes data and returns a view is much better.


Isn't a template essentially a "function" that takes data and returns a view? I don't have much experience with UI programming but I don't really see the conceptual difference.


No, the template is usually a static file that is read into memory and served over HTTP or locally if it's a JS only site. But before being served, it's being picked at via imperative operations. And if it's really bad, then when a user presses a button it triggers some event that actually statefully changes the view itself.


I don't see template operations as imperative, rather I see them as declarative: you write the HTML and declare lists/placeholders inside the HTML. You also don't generally do much with the data except insert it into HTML; the rest of the backend is responsible for fetching/etc the data.


Yeah, but the function itself doesn't return that view. The framework you're working in essentially composes the view based on the associated code and template. This is fundamentally different from just returning the view itself in the function. These are all declarative approaches, but leaving your view in code is a lot better.

Every template-based framework I've seen has involved a lot of magic, syntax and DSL. That's not to say that JSX isn't a DSL as well but you can return regular React.Elements just as easily. You shouldn't need to remember a unique expression system just to be able to conditionally render. The premise that you're just not doing much with the data except dumping it into HTML is less true the more complex your site becomes. Think of the loads of conditions and evaluations that happen when you go to your Facebook wall or friends list.
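The distinction being drawn can be sketched in miniature (userBadge is a hypothetical component, and returning an HTML string here stands in for returning React elements):

```javascript
// In a template DSL, a conditional needs the template language's own
// expression syntax, e.g.:
//   {% if user.admin %}<span>Admin</span>{% endif %}
//
// With a view-as-function, the conditional is just host-language code:
function userBadge(user) {
  return user.admin ? "<span>Admin</span>" : "<span>Member</span>";
}

userBadge({ admin: true }); // "<span>Admin</span>"
```

The conditional logic stays in one language either way; the question is whether it lives in the host language or in a separate mini-language embedded in markup.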


It's okay because despite JSX being a DSL you don't have to use it even though everyone does?

Your line of thought is a large part of why I prefer ClojureScript to Javascript for SPAs. With reagent your components just work like the language, with nothing particularly special to remember.


Surprisingly enough, I'm a clojure developer and I really do like the hiccup syntax in Reagent. I haven't had the courage to use it in anything professional yet though.

EDIT: To answer your question, I actually think that JSX is a downside to React. I get that it was needed to convince the web crowd to adopt the library, but we need to stop pretending like we're writing HTML. We're not. The framework will take our JSX and do whatever it wants to it. I wish we just treated the DOM like a compilation target. That's actually what I like about Flutter - it treats the view as a render target and not as a document that you write to. Flutter web actually uses the canvas API to draw. And because of this, you can define your view model in a much more sane way (while still being fast). E.g. Flutter's Layouts make much more sense than HTML and flexbox/grid. They managed to separate the layout from elements. https://docs.flutter.dev/development/ui/layout


Amrita for ruby. That was one of my favorite template frameworks. It parsed your HTML document as HTML and used element attributes to mark which elements corresponded to which keys in your data model. Incredibly simple and unreasonably powerful.

These days I just use an Amrita-like engine I've built myself, with <template> elements directly in the document. It only takes 40 lines of code to match child elements to data model items and fill in the template. Your browser already has a powerful parser and literal templates built in... why invent a whole new language just for HTML templates?

I've never been able to fully apprehend the reasoning for JSX.


That is a usual way this is done but not what it means. In fact, HTML recently got a `<template>` tag which is definitely not a separate file or served separately. Vue.js also refers to the HTML-ish portion of a component as "template".


We can debate syntax but a Go or Python based template language is not conceptually any different to JSX based syntax. Both are essentially functions that take some input data and return the data formatted in HTML.
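The point above, made concrete: whether written as a Go/Python template or as JSX, a template is a function from data to markup. A plain template literal shows the same shape (listingRow and its fields are illustrative):

```javascript
// A template, viewed abstractly: data in, formatted HTML out.
const listingRow = ({ address, price }) =>
  `<tr><td>${address}</td><td>$${price}</td></tr>`;

listingRow({ address: "1 Main St", price: 500 });
// "<tr><td>1 Main St</td><td>$500</td></tr>"
```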


Sure, if you implement a trivial single-task app (one tiny feature of a Calendar app) with an unfriendly UI, you don't need an SPA. That doesn't prove anything.


It doesn't "prove" anything, but it's a data point, and quite a lot more thorough than your dismissive comment. If you want to make a well supported counter-argument, please be my guest. It's easy to criticize, it's harder to make a thoughtful argument.


[flagged]


I kind of have to agree here. There are very few times I've ever worked on a SPA and felt like throwing it all away and using the alternative.

The last time I thought about this, which admittedly was many years ago, the alternatives that I knew about were: server-side frameworks like ASP.NET MVC Razor, PHP, RoR templates, Node.js EJS, Jade (now Pug), and static HTML.

Nowadays, you can create extremely elegant and performant SPAs with tools like Next.js and Remix, so I really couldn't agree less to the OP.


This reads like a Reddit comment. Let's try to be better than that here.


Harsh. Dude ported liveview, this isn’t amateur level work.

It’s a useful brush but you can’t use it to paint everything.


I just want to clarify, I didn't create the LiveView port, but I think it's excellent work.


You're being very rude and dismissive. I may well have more SPA experience than you do. Don't make assumptions about people and then use that to dismiss something they say - we're better than that here on HN.


I am indeed (rude). You can have 100 years of experience, it doesn't make you more credible at thinking that engineers who makes choices that are different than yours as incompetent, by deeming them as mistakes. I mean, if a company wants to choose a tech stack, there are so many things to consider for that choice. Your article is just generic ideas about the fact that SPA are mistakes and SSR apps are better. You're not taking into consideration any of each industry's and company's means, priorities, specificities.

In fact, your 100 years of experience are what's hindering your ability to see how useful SPAs are. And you're resorting to that same old narrative of "it was better before". It's also the niche you decided to position yourself in to sell courses and stuff. Which makes your ideas even more suspicious as they can't be unbiased. Like in politics, they'll dogmatically say that their opponents are wrong, just because they have a "market" to preserve.

So yes, I'm rude, as much as you're inconsiderate to the vast complexity of software engineering, just to preserve your niche market.

And as you say in your website: "Hate the complexity of modern front‑end web development? I send out a short email each weekday on how to build a simpler, more resilient web. Join 13k+ others."


This is rude


Instead why don't you let us know why you think that position is incorrect rather than resorting to reddit-tier insults?


Quoting another comment to OP

"I am indeed (rude). You can have 100 years of experience, it doesn't make you more credible at thinking that engineers who makes choices that are different than yours as incompetent, by deeming them as mistakes. I mean, if a company wants to choose a tech stack, there are so many things to consider for that choice. Your article is just generic ideas about the fact that SPA are mistakes and SSR apps are better. You're not taking into consideration any of each industry's and company's means, priorities, specificities.

In fact, your 100 years of experience are what's hindering your ability to see how useful SPAs are. And you're resorting to that same old narrative of "it was better before". It's also the niche you decided to position yourself in to sell courses and stuff. Which makes your ideas even more suspicious as they can't be unbiased. Like in politics, they'll dogmatically say that their opponents are wrong, just because they have a "market" to preserve.

So yes, I'm rude, as much as you're inconsiderate to the vast complexity of software engineering, just to preserve your niche market.

And as you say in your website: "Hate the complexity of modern front‑end web development? I send out a short email each weekday on how to build a simpler, more resilient web. Join 13k+ others." "


wow you must be incredibly smart and discerning, being able to confidently make assumptions and judgments about people's lives and background based on one or two HN comments, and then dismiss them as an ignorant and incapable fool.


I have worked as an engineer on products in a lot of different domains in my career including:

- edtech

- real estate

- HR/payroll software

and every single project I've worked on had enough complex state to benefit from using a SPA.

I've also worked on one complex web project run by someone dogmatically against client-side logic, and it was absolute hell. The codebase was full of janky hacks to approximate the same complex session state and full interactivity that an SPA provides trivially.

This whole thread seems like a huge echo chamber of people who seem annoyed that FE development is getting too complex.

But do you all really think that the entire industry is so disconnected from its needs, and the entire community of FE engineers just have their heads in the sand about the requirements of working in their domain of expertise?

And to everyone saying SPAs are sometimes useful in very rare occasions, what web apps do you use regularly? Gmail? Jira? Slack? LinkedIn? Figma? Notion? Docs? Dropbox? Airbnb? Netflix? Airtable? ...

How many of these use javascript to do the heavy lifting on the client?


In your experience how often did the complex state have to be modelled on the backend as well as the frontend, to provide validation of submitted data?

And for personal interest, how did you handle streaming in updates and edits from multiple people, and stopping people from editing eg the same real estate listing at the same time? I know it can be done but I find it easier with server round trips for every request.


As for your question about persisted data, yeah it definitely needs to be modeled on the backend, right? How much modeling should be done on the FE is a good question. This question is sort of why GraphQL exists. GraphQL provides a contract between the backend and the frontend about what the data sent between the two needs to look like. Apollo provides a pretty good and easy-to-use GraphQL implementation in my experience.

As for real-time updates, this hasn't been a real concern in most of my professional work. If I needed to support this, I think what I would do is:

1.) Define a clear set of actions for everything that needs to be streamed to users real-time (IE, I wouldn't try to stream the full state tree itself, this is where conflicts would arise)

2.) Determine how each action mutates the graphql tree

3.) Add a process that watches for these actions at the top of the React tree and manipulate the GraphQL cache as necessary in response to the incoming actions, (coming from a WebSocket connection, for example).

If you stream the actions, you may be able to resolve simultaneous edits naturally. Or, if you really want to block simultaneous edits, I think you should 1.) definitely ensure the edit is really blocked on the backend, and 2.) stream a "disable" action to the user via websocket.
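Steps 1-3 above can be sketched as a small reducer. This is a hedged sketch - the action names and shapes are hypothetical, and a real implementation would target the GraphQL cache rather than a plain object - but it shows why folding small actions into a local cache composes better than replacing whole state trees:

```javascript
// Fold a streamed action into the local cache. Each action describes one
// narrow mutation, so updates from different users compose instead of
// clobbering each other.
function applyAction(cache, action) {
  switch (action.type) {
    case "listing/updated":
      return { ...cache, [action.id]: { ...cache[action.id], ...action.fields } };
    case "listing/locked": // e.g. a simultaneous-edit block, mirrored client-side
      return { ...cache, [action.id]: { ...cache[action.id], locked: true } };
    default:
      return cache; // unknown actions are ignored
  }
}

// Actions arriving over a WebSocket would be folded in as they stream:
let cache = { a1: { price: 100 } };
cache = applyAction(cache, { type: "listing/updated", id: "a1", fields: { price: 120 } });
cache = applyAction(cache, { type: "listing/locked", id: "a1" });
// cache.a1 is now { price: 120, locked: true }
```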


State is a pretty loaded word, but what I meant more specifically is complex application state, and by that I mean the ephemeral state that manages a user's usage of the application.

For example, in the real estate use case, there may be, all on the same page:

- a table of listings and grid of listings that can be toggled between

- a map that can be displayed alongside the table with all the listings plotted

- dynamic updates on the map that highlight listing when it is hovered on the table

- a detail modal that displays whenever any listing on the table or map is clicked

- pagination that can be controlled from either the modal or the table

- selection state for each row on the table and a bulk action bar that conditionally appears when the row is selected.

- filters and search that need to work quickly and not disrupt the user experience

None of these things are particularly unusual or groundbreaking, but trying to coordinate all of this via server rendering or jQuery would be a huge mess and potentially a jarring UX.
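For illustration, the ephemeral state in the list above might be held in a single client-side store something like this (a hypothetical sketch; field names are made up). This is trivial to keep in an SPA but awkward to round-trip through a full server render on every interaction:

```javascript
// Hypothetical shape of the page's ephemeral UI state.
const initialUiState = {
  view: "table",           // "table" | "grid"
  mapVisible: true,
  hoveredListingId: null,  // drives the map highlight
  detailModalId: null,     // which listing's modal is open, if any
  page: 1,
  selectedIds: new Set(),  // drives the bulk action bar
  filters: { query: "", minPrice: null },
};

// Example transition: toggle one row's selection without touching anything else.
function toggleSelection(state, id) {
  const selectedIds = new Set(state.selectedIds);
  selectedIds.has(id) ? selectedIds.delete(id) : selectedIds.add(id);
  return { ...state, selectedIds };
}
```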


These are good concerns. IME it's important to be able to share code and schemas & domain data manipulation code between BE and FE like you can do eg with Clojure + ClojureScript.


Every week someone posts some half-baked blog ramble about how SPAs are bad (except for media sites!).

How about this - I am just as or more efficient working with SPA frameworks such as React as working with server-side-rendering. I have invested in a skill and toolset that can deliver any sort of website or web application from blogs to Youtube to Figma. I don't see any reason I would invest in learning an MPA framework that is only good for a subset of that.

If you are delivering content-only website with simple forms you might as well just use a CMS and call it a day.


Why are wood shops loaded to the rafters with tools? Because there is no Golden Hammer, only people who think they’ve found one.

You’re saying you’re doubling down on a single solution, which is probably not actually true, but you are surrounded by younger developers who will copy what you seem to be doing rather than what you’re actually doing.

All of these unresolved arguments are about team dynamics, not technology, which is why they never get resolved. Because we talk about our experiences or “objective” things like logic.


> All of these unresolved arguments are about team dynamics, not technology

No, it's definitely about technology. Let me simplify this for you. Technology A can do thing 1 very well, and thing 2 decently. Technology B can do thing 2 well, but can't do thing 1 at all.

In reality, people who only do technology B claim technology B is necessary to do thing 2 and go to great lengths to write blog posts claiming such, while people who only do technology A get on with doing things 1 and 2 without feeling the need to rant.


Yeah that’s not a technology problem, that’s an ego problem.


I, as a user, hate them with passion.

I regularly stumble upon stale data in SPAs, even big-name ones like LinkedIn, Jira, GitHub, etc. I wonder how the hell corporations with thousands of engineers, many of them among the brightest in the world, can't make a proper SPA.

Can we just go back to server side rendered pages, with a bit of JS sprinkled in, please.


Github is not an SPA. They use the middle road of dynamically loading HTML snippets, using webcomponents, and lightweight JavaScript. Any problems you see with most of Github's interface should be chalked up to server-rendered pages with sprinkled JS.


I dread using GitHub's fake client-side navigation. For me it's always faster to just load a link "for real" rather than wait for their terrible JS to pretend to load the link for me.


Interesting that you use GitHub as an example, when GitHub is not an SPA. So you're basically admitting these problems aren't necessarily inherent to SPAs, but to the developers/teams building them. They may just be a bit harder to get right in SPAs.


> Github

GitHub is a great example of a site that would benefit from actually being an SPA, imo. It is server rendered pages with JS sprinkled in, and you can easily end up in the situation where parts that live update fall out of sync with parts that don't. For example, being able to see that an issue is closed while the badge on the Issues tab still says you have one open.

I share your general sentiment over the state of them, though. It's not actually that hard to avoid breaking navigation buttons, having links open in new tabs correctly, etc., so I don't know how it gets screwed up so much.
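For what it's worth, the new-tab part is just a few lines of defensive code. A rough sketch (the `navigate` callback is hypothetical, standing in for whatever router you use):

```javascript
// Sketch of an SPA link handler that preserves native browser behaviour.
// The `navigate` callback is a stand-in for your router's navigation API.
function onLinkClick(event, navigate) {
  // Leave ctrl/cmd-click (new tab), shift-click (new window),
  // middle-click, and already-handled events to the browser.
  if (
    event.defaultPrevented ||
    event.button !== 0 ||
    event.metaKey || event.ctrlKey || event.shiftKey || event.altKey
  ) {
    return;
  }
  event.preventDefault();
  navigate(event.currentTarget.getAttribute("href"));
}
```

Mature routers already do exactly this under the hood, which is why the broken cases are almost always hand-rolled click handlers.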


This ship has sailed. SPAs shortcomings are widely known and addressed by frameworks.

Whatever MPA alternative you bring will need to address other shortcomings. There will always be something quirky due to building applications in a technology designed for hypertext documents.


Many of those shortcomings were solved 15-20 years ago. Far too many SPAs are little more than old school request and response, GET and POSTs. I've literally had to drop into the dev tools console just to push through a page with 7 sets of radio buttons because Angular broke in some spectacular way, on a page that is and should be boring (prescription refill page).


I have in my life used a couple of SPAs where I thought "Man, this works really smoothly and quickly!" Like there have seriously been a couple really good ones.

They absolutely can be done right; I've built a couple that I think are. I don't think the average team has it in them to build a good SPA.

The majority of SPAs are user hostile hot garbage.

The other 95% of SPAs were full of minor frustrations that could have been easily avoided by not being an SPA, including:

  1. Completely unhandled errors with no indication anything is wrong. 
    - This is a majority of SPAs. Any request error on flaky internet? Don't bother telling the user, just break.
    - The sheer number of times I've needed to open inspector to see that something has gone wrong… At least an actual non-SPA request that fails will show you an error page.  
  2. Broken… native… everything 
    - 2.1 Especially "broken open link in new tab"/middle click - see next.
  3. Inability to run the site in multiple tabs.
    - State is entirely local so state gets broken *easily*.
    - Some just outright refuse and warn you.
  4. Adding everything I do to the browser history api
    - Makes it even worse than the back-button-not-working of old. Have to hit back a thousand times.
    - Scrolling SHOULD NEVER add things to my back button. NEVER.  
  5. High CPU usage on a site doing seemingly nothing
    - Looking at you, Medium.  
  6. Generally being way slower to use than the previous non-SPA website
    - Looking at you, Wells Fargo.
  7. LOSING MY POSITION ON HITTING BACK
    - The sheer number of times I've been scrolling down an infinite list for 30-45 seconds, clicked on something, realized it's not what I wanted, hit back, and been returned to the top of the list is entirely unacceptable.
    - This is very often combined with 2.1 and 3 to make for a truly infuriating experience
Like don't get me wrong, you can do a lot of this junk in a non-SPA but the failure state is almost always better.
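On point 4, the fix is mostly knowing when to use `replaceState` instead of `pushState`. A minimal sketch (the `filter` query parameter is made up for illustration):

```javascript
// Compute the new URL for a filter change. The caller should apply it
// with history.replaceState, NOT pushState, so scrolling and filter
// tweaks don't pollute the back stack. "filter" is a made-up parameter.
function withFilter(href, name) {
  const url = new URL(href);
  url.searchParams.set("filter", name);
  return url.toString();
}

// In the page itself:
//   history.replaceState(history.state, "", withFilter(location.href, "open"));
// Reserve history.pushState for actual navigations the user expects
// the Back button to undo.
```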


> Inability to run the site in multiple tabs. State is entirely local so state gets broken easily.

I have seen this more for sites where a TON of state is stored in the backend, the site expects the user to have 1 tab that is kept in sync with what the server thinks that user session is doing.

Heck I have seen sites that throw up errors asking me not to open multiple tabs.

I have also seen sites where if I have Tab 1 on Page A, and I open Tab 2 to Page B, then go back to Tab 1 and navigate to Page C, if I go to tab 2 and try to navigate anywhere, it just goes to Page C.

Fun times when the server has too much state.


I'm not clear why the author thinks that "media sites, really" are the only SPA use case. Have they never used webmail (like GMail), map apps (like Google Maps), or social networks (like Twitter)?


I'll grant you Google Maps is perhaps a perfect example of an SPA use case.

But Gmail? Why do we need an SPA for that? Receiving email notifications could be done with websockets, and clicking on an email should go to a new page displaying it. I don't know if the initial Gmail was an SPA or not, but the current version is very, very slow and consumes a lot of memory to display emails that worked even in terminal clients. Remember Pine?

The same goes for Twitter. I have to press "Load 39 new tweets" to load the new ones anyway, so why is it an SPA? Just for that notification? If you gave me a Twitter client where I needed to refresh the page to load new tweets but that worked faster and consumed less memory, I would happily use it.


Do you want a full page reload just because you deleted an email? Or flagged it? Or marked it as spam? Or even do you want your webmail in frames just to have a reader pane? Webmail is a great SPA candidate.


I do agree that email clients are perhaps a good SPA candidate, but at the same time most of that type of interaction can be done with a sprinkle of JS.

Because of a few simple interactions that are better without a page reload, people write the ENTIRE application on the frontend, with everything duplicated (modeling, validation, routing, error handling, etc).
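As a sketch of what that "sprinkle of JS" looks like for one such interaction, flagging an email without a reload (the endpoint path and data attribute are invented for illustration):

```javascript
// Flag an email without a full page reload. The endpoint path and the
// data-email-id attribute are hypothetical; the point is that one small
// handler per action replaces an entire client-side application.
async function flagEmail(id) {
  const res = await fetch(`/emails/${encodeURIComponent(id)}/flag`, {
    method: "POST",
  });
  if (!res.ok) {
    // Surface failures instead of silently breaking, a common SPA sin.
    alert("Could not flag the email. Please try again.");
    return false;
  }
  document.querySelector(`[data-email-id="${id}"]`)?.classList.add("flagged");
  return true;
}
```

Everything else (the list, the reader view, navigation) can stay server-rendered.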


[flagged]


You keep making this ad hominem attack without presenting any evidence.


You’re really enjoying that brush today.


Don't be an asshole.


Anyone can choose the tech stack, patterns, or whatever that he likes. But developers enjoy being absolutists, and deem something they don't like as a MISTAKE, or would tell you that the way they think is the ONLY way to think.

You do your app the way you want, depending on the UX you want to provide, the tech you enjoy implementing, and the patterns you like to follow.


> Anyone can choose the tech stack, patterns, or whatever that he likes

Maybe for personal projects, but 99% of us have to use the tech stack, pattern, or whatever of our employer that was decided on (presumably by some consensus at some point in the past). Publicly pointing out the flaws in what might have made sense then but might not make sense now is a Good Thing, so that those flaws might be taken into consideration in the next round of consensus building.


I don’t understand what sort of cave trolls think I get to decide for me. Even as a lead I am making decisions based on the situation on the ground and those are informed by what people are comfortable with and what I can help them get comfortable with. It’s a team activity.

But every conversation has someone spouting off like these are experiments in a Petri dish. Petri dish projects don’t matter. Haven’t for a long time.


That's what interview processes are for. And if the consensus goes for a stack you don't like, say for example doing an SPA, then maybe it's not a mistake as stated in the article. Unless you guys think that any engineer who makes choices different than yours is incompetent.


Don't work for a company running a tech stack you don't like.


Don't limit this to SPAs, include the Jamstack, which has all the same problems, and the false promise that if you can statically render a few pages or parts of pages and put them on a CDN, everything will be fast. It won't, because to load dynamic content, you still have to do a lot of work and talk to a (gasp) centralized API over the internet.

SPAs and Jamstack favor developer convenience over end user experience. Let's have fewer loading spinners, and more SSR-by-default for pages with dynamic content.


But there's a cost savings, right? JSON requests for just the data necessary vs. sending the whole HTML page each time.

Personally I'd prefer to develop a website the old-fashioned way, but I see that the bandwidth savings is a major point for SPAs and if you're running a business...


SSR by default, and then if you want, client-side hydration and navigation so subsequent page loads happen in the client. However, the size difference between an HTML response and a JSON response is negligible, and HTML responses don't have to wait for JavaScript to download, parse, execute, kick off a request over the internet, get the data back, execute the result, and update the DOM. Browsers have literally decades of optimization to show HTML to users as fast as possible, and doing this in JavaScript is fundamentally slower.
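The size point is easy to demonstrate with a toy sketch comparing the two responses for the same data (the list contents are invented):

```javascript
// Toy comparison of the two wire formats for the same data. The HTML
// version is ready to paint as it streams in; the JSON version still
// needs client JS to download, parse, run, and build DOM first.
function renderListHtml(items) {
  const rows = items.map((item) => `<li>${item}</li>`).join("");
  return `<!doctype html><ul>${rows}</ul>`;
}

function renderListJson(items) {
  return JSON.stringify({ items });
}

// For small payloads the byte difference between the two is noise; the
// real cost of the JSON route is the JS round trip before first paint.
```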


> developer convenience

This is really, really important though.


Am I the only one preferring good old-school, "boring" stacks that I can run entirely on my own machine if needed and understand the sequence of operations as opposed to relying on dozens of third-parties, services, APIs, etc just to do what a stupid PHP script on shared hosting could do 20 years ago? I don't consider the modern complexity as convenience.


I assume you're not developing modern web applications then?


One thing that I found working in a bigger organisation is just how well React encapsulates Design Systems. You can have a dedicated team of talented frontend engineers that builds a really solid design system, and then other folks building the actual user facing bits can use them, and everything will be nice and consistent across all the various apps.

For example look at how well this has been going for Uber.

Now this is certainly possible, and even relatively easy to do in server-side frameworks, but as others have stated, it's harder to make sure it works nicely in all the various edge cases and with the visual flair that end users require, and to keep it all consistent and upgradable across the board.

React (and I'd wager other component-based approaches) shines there - it's low-level enough that you can implement anything with it, but it still allows you to build complex things fast. And then you are free to implement the actual business logic of the backend with the technologies of your choice. Even better, various teams can do that in their preferred language, without the end user being affected by the difference.

Throw in React Native to make the styles transferrable to mobile apps and you have the perfect sweet spot for largish tech companies. And since those tend to be the more vocal ones, we get this, in my opinion, largely deserved desire to build SPAs.

Smaller teams and startups of course don't have those incentives, so they will understandably not arrive at the same cost/benefit analysis.


The problem with SPAs is that they have been misused and overused, at least in Italy. Tons of projects built as SPAs in which we have to re-implement basic browser features like the back button, just because the project manager has no idea how this technology works. I've worked on at least 4 projects which could have been written in Next.js in half the time.


Hate to break it to you but Nextjs is for building SPAs.


This is way oversimplified. You can build a completely static website with NextJS. What I love about it is that you can do everything : SSR by default, SSG as an option, and awesome features like ISR etc.

However, the way NextJS implements SSR is really weird right now: if you use their <Link/> component, the page's props are actually fetched with an XHR request, and then the page is rendered. I don't know why. It does feel like an SPA in the end.


Next.js recommends delivering static assets by default and is geared towards that.

> We recommend using Static Generation (with and without data) whenever possible because your page can be built once and served by CDN, which makes it much faster than having a server render the page on every request.

If you need dynamic content, they recommend server-side rendering, and lastly client-side rendering only if the page is unaffected by SEO and requires a lot of in-page updates.

https://nextjs.org/learn/basics/data-fetching/two-forms


I was going to say.... ironically NextJS is just an additional layer for creating static-rendered sites over an SPA framework. We have come full circle.


All of software engineering is a flat circle. All of it. On the backend side it's microservices vs. monoliths.


They don't know about frameworks in Italy?


I think SPAs were the best solution we had at the time and I enjoyed trying to build good UX with them. Yes, React started fresh and ended up driving me back to Rails because I'm just not smart enough. That said, I am really excited about Django w/HTMX, Rails w/ Hotwire, and Next and Remix with React. Particularly Remix.


I can get good at a lot of things, but most of them (including half the ones I’m good at) really just aren’t worth my time and energy.

Ain’t nobody got time for that is more often closer to the mark. But anything that requires hypervigilance is eventually going to make you look dumb. Just don’t let them put “human error” on the RCA. Yes, You screwed up, but We put you there in the first place.


It should be obvious why SPA/PWA frameworks were developed by the likes of Facebook and Google: offloading their content rendering to the client.

Instead of Google/Facebook CPU cycles being spent on rendering their content, it's now the client devices, while the Google/Facebook infrastructure is "just" serving the data.


First lead dev offered me advice for the rest of my career.

Most of what you are going to see people argue about are cyclical fads. A pendulum. We try A and it doesn’t work. So we try !A. And when people forget why we stopped doing A someone tries it again over and over.

How are things different this time should be your second question. Your first question is what is the middle ground? Boolean logic falls on its face in the real world. If 1 is bad that doesn’t mean 200 is better. If 200 was bad the solution is not 1. The best answer is probably three.


So I ran across a shit ton of your comments in this thread, like how you think, you should hit me up if you're ever hiring.


I'm sure the data that FB and Google are delivering is far more expensive than rendering HTML would be. I am skeptical that server side rendering was a big enough bottleneck for them for that to be their primary motivation. I'm also skeptical that SPA frameworks would have been widely adopted by developers all over the world if they were just some conspiratorial plot by tech companies to save on CPU cycles.


This is completely inaccurate. Every millisecond of load time affects conversion so much that they would trade huge amounts of CPU to speed up a page load.


Until you have a monopoly, at which point conversion rates don't matter or rather don't change despite terrible UX.


In my humble opinion, SPAs are not the fundamental mistakes. SPAs are applications. There's nothing wrong with having applications. The mistake was not maintaining a distinction between applications and content. The difference between a static document, a collection of hypertext documents, and a web application is very blurry. If it were less blurry, there would have been things we would have been able to do that we are not currently able to do.


SPAs were a good solution when page load/rendering times were longer. However, with today's internet speeds, edge caching, HTTP/3 and QUIC, as well as today's chips and browser implementations, page reloads are barely noticeable, so you get all the goodness the browser gives you out of the box by creating MPAs.

There are still some cases where SPAs are useful, such as replacing complex desktop applications, but 95% of web apps don't need it.


It was a big day for me when I realized the CSS perf tool had been removed from my browser because the clock resolution was no longer sufficient to make it work. Everything was <1ms.

Bloom filters for CSS were a huge deal, and the CSS load and apply times are one of several things SPAs were trying to amortize.


It's posts like these where I can feel the age of HN.

For many developers, especially in startups, SPAs are all they've ever known.

Now, I'm relatively young, but I'm old enough to know monolithic django before moving onto angular then onto react, and it is quite staggering what I'm capable of doing in SPAs as opposed to traditional web pages.

You mentioned SoundCloud and YouTube, but you'd be lying to yourself if you think the list comes even close to stopping there.

How about fantasy drafts in DraftKings, Sleeper, or FanDuel?

How about chess like chess.com or lichess?

How about collab tools like Trello, Monday, or Asana?

I wouldn't even dream of building these types of applications in a traditional web app

The level of interactivity that is capable within SPAs makes applications like these seamless and snappy, and overall feel-good for the user.


What total nonsense. I'm quite old by our industry's standards and remember all this ”great and easy” way of working with backend templates, and I struggle to find it any easier than working with SPAs.

I also thank the computer gods for SPAs each time I click around Github and wait for its janky, slow interface to reload the whole page.

Please save us from the likes of RoR etc., which are slow and have an awful user experience. Just try to use Basecamp. It’s so ugly, slow, and free of any features at all that your eyes will bleed.

I personally am totally for ditching html and css from the browsers and just leave JS with some nice APIs (without DOM nonsense) and let us work with web apps like we would with desktop apps.

Web is not for documents any more. Just get over it.


"I personally am totally for ditching html and css from the browsers and just leave JS with some nice APIs (without DOM nonsense) and let us work with web apps like we would with desktop apps."

Have you tried using React Native For Web? The name is so ironic it almost seems like a joke, but I think the rationale is to simplify things as you are suggesting


Yeah it’s a nice idea, but it still has to mess with DOM and css in the end.

I just think that web apps are not web pages. All this struggle with CSS quirks and all this power wasted on parsing the DOM is not really needed. I know that if what you need is a blog, then HTML with CSS is enough. But building interactive apps with backend template rendering ends up like GitHub (slow, with a bad user experience), or it requires a lot of the crazy jQuery hackery we did in the old days. And I’ll pick building a React app any time I have a choice like that.


SPAs are great when you're actually making an app.

It's a little less great when you're just putting a browser in the browser so they can browse while they browse.

Even then sometimes it works when the content is more app like and you don't want to set up a bunch of js and websocket connections with every load.

Maybe we need some new, easy-to-use APIs for JS that persist their state between pages.
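Some of this already exists: `sessionStorage` survives same-tab page loads, so an MPA can carry lightweight JS state across navigations. A sketch (the key name is made up; storage is injectable so the helper stays testable):

```javascript
// Persist a small piece of state across full page loads in the same tab.
// "compose-draft" is a made-up key; pass a storage object for testing,
// defaulting to the browser's sessionStorage.
function saveDraft(text, storage = sessionStorage) {
  storage.setItem("compose-draft", text);
}

function loadDraft(storage = sessionStorage) {
  return storage.getItem("compose-draft") ?? "";
}
```

localStorage and BroadcastChannel similarly cover the cross-tab cases; the primitives are there even if the ergonomics aren't.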


>SPA's suck! Here's my ad-hoc replacement architecture that you should totally just use!

Really though, this is a dead horse. SPAs are just a tradeoff like everything else in engineering. We trade the simplicity of a monolith for the ability to parcel out work efficiently between large disparate teams of frontend/backend engineers.


Most of that navigation stuff is long-solved - e.g. Angular's router will do a lot of that.

If you want to do a SPA without a modern library, then yes you will need to reimplement a bunch of stuff. But who would start from scratch? Unless you are doing something incredibly niche or tiny-tiny-tiny, just use Angular or React or whatever and don't worry about the minutiae.

Of course there will now be people screaming at their screen about how this attitude is everything that is wrong with the modern web etc. I am sure they'll get over it one day, or just go use gopher/Gemini if not.


I'm building a business application at my job. Users navigate (filter, configure, visualize 3D models of) huge lists of data (bills of material, product structures). SPAs are heaven-sent. Delegating every filter or configuration action to the server would be insane. The same goes for server-side paging. Another example is 3D viewing: when a part is selected in the viewer, detailed data should be displayed. A round trip to the server for this? No, thank you.

I have to heavily disagree with this post, which, by the way, lacks any in-depth arguments.


A SPA is an Application and not a Web Site. With a SPA you can replace classical desktop or mobile applications, for example Gmail, Spotify or maybe Dropbox.

SPAs are good for that, because you just have to develop it once, instead of 5 or 10 times for all platforms.

SPAs are bad for traditional websites that deliver content.


I can't agree with this. That list of 9 complexities is well abstracted by the most common libraries and frameworks for SPAs. Having done a lot of SPA development, I can say routing is not anywhere near the top of the list of trouble spots you encounter. State management is another story.


When I was a FE developer building single page apps, I would hear these arguments and say "you just don't get it, man, SPAs are the future". Now that I don't get paid to work on SPAs, these and other downsides come into very stark relief, and it's the advantages which seem more dubious to me. I doubt I'll ever touch an SPA again, and I'm not mad about that.

Indeed: "It is difficult to get a man to understand something when his salary depends upon his not understanding it"


Everytime I see one of these binary "XYZ is good/bad" posts I come away thinking that either I don't know what I'm doing, or the author doesn't.

Having built relatively simple Django web applications, and having built and managed building complex SPAs with Angular/React and GraphQL backends, I have a hard time thinking of when I would ever willingly grab for serving up dynamic HTML ever again.

The mixing of concerns with something like Django vs just having a native JS application loaded and running in the browser pulling in data as needed once loaded is a world of difference.

Using straw man examples of people using a technology badly to claim the technology itself is a mistake is silly.

There are literal annals of examples of poor memory management in compiled software causing massive issues. Is allowing memory management to be handled directly by software engineers a mistake? In some cases, yes, people would be better off with garbage collection. But you'd have a revolt in embedded systems development if that were taken away.

SPAs create more surface area for inexperienced or poor developers to screw up the UX. But a talented team developing SPAs (which really is a misnomer as you're rarely not chunking and lazy loading things behind the scenes) will ALWAYS result in a better UX for users than dealing with the browsers loading individual pages, and very likely more manageable development too.

That said, as I've spent more time in development I've noticed there's a noticeable shortage of talented teams out there.


SPAs' #1 drawback is that you have to rebuild browser functionality inside of the browser.

The browser natively handles internet connectivity, back / forward buttons, refresh buttons, URL management. Additionally, users have been trained to understand the responses, hit the refresh button, and use the back button. Browsers have gotten better at distinguishing between your internet being down and the website being down.

Good SPAs have to rebuild all of that functionality.


Use them where they make sense, and don't get religious about it.

I recently wrote a site to track Apple-silicon-native audio software. It works great as an MPA. It's fast, organized, and easy to use.

And the first thing someone offered as a suggestion was: "Why don't you use https://www.sanity.io/ for the content and nextjs or gatsbyjs for the front end?"

I can't even.


SPAs were not a mistake, SPAs were an attempt to make the front end work like front ends have ALWAYS worked.

Back before the web, people made C++ and VB user interfaces that interacted directly with the data source. SPAs are an attempt to make the web closer to that well-known paradigm. The POST-redirect-refresh cycle was an insane programming paradigm that wasn't found in any previous era of programming.


Not so. IBM mainframe systems (3270 terminal based) that ran most of commerce for a long time are basically the same paradigm as a web browser. Send a form to the smart terminal (GET), wait for the response with the field values (POST), send another form.

In fact there are adapters that literally turn these applications into websites by translating the forms into HTML. Commonly seen when you need to do something like change the beneficiaries on your health insurance.

“Client-server” paradigm came out in the 90s when PCs became powerful enough to run a “thick client” application, and was considered revolutionary.

In other words, the eternal cycle continues… :)


GUI application patterns predate the 90s (Mac, X11, Apollo Domain, Xerox Star, etc)


Of course yes, GUIs go all the way back to Sketchpad in 1963. The PARC work was more about putting files on a server than putting the application logic on a server. X11 put all of the application on a server (or as X11 calls it, a client…). Based on my bookshelf, the “client-server” paradigm of putting the UI part and a substantial portion of the related logic on the client, but with most of the business logic on the server, reached mainstream adoption in the 90s.


>Back before the web people made C++ and VB user interfaces that interact directly with the data source. SPAs are an attempt to make the web closer to that well known paradigm.

I wish more web developers were aware of just how damn good UI tooling was for native systems. Stuff like WinForms/WPF/Visual Studio and Cocoa/UIkit/Xcode on OSX. Perhaps we could get away from the insanity of rebuilding a select component for every new project.


> POST REDIRECT REFRESH cycle was an insane programming paradigm that wasn't found in any previous edition of programming

Actually I'm old enough to remember old mainframe CICS programming that was a lot closer to the form-based HTML server-side programming of old. When "the web" came out in the mid-90's, it was actually a "blast from the past".


Most SPAs don't interact directly with the data source; they interact with a middle service. Also, many older architectures do draw entire "pages" on the screen at once, rather than manipulating only individual components (at least not on every interaction).


Clearly, we can't have an SPA front end performing an SQL query directly. The HTTP/GQL/REST services are the SPA equivalent of that data source (they are still a data source). The C++ and VB apps written back in the day were single user programs so yes that's a difference but not by much architecturally.

"Pages" are similarly in SPAs too. We usually have a router that helps the app decide what main "page" is on the screen.


I showed an interviewer a piece of something I built in a traditional page-per-view design, with some xmlhttprequest where appropriate, and a sprinkling of Vue where it added usability to some of the modal dialogs. Their only response was "why didn't you build it in React?".

I stopped interviewing for front-end work since then. The landscape changes frustratingly too often.


I think SPA tooling solves a composability issue that people didn't realize they had, and so SPAs are used in a lot of instances where MPAs make way more sense and would be more performant, because the developers want to utilize the composability of components. The fact that SSR is a feature for React and Vue makes me think a lot of projects were really just aching for composability, not interactivity and dynamic data features of SPAs.

I think a lot of backend frameworks offer some attempts at frontend composability on the server side, but it's usually not the same approach that SPA tooling uses. I would love to get people's thoughts on backend tooling that is attempting to solve composability in a serious way, not just tacking it onto an existing MVC framework.


This. I'm keeping an eye on the newer server side renderings like Remix because most of the time what I want is to be able to write components and have them render to HTML, with a few remaining interactive.

If Web Components had gotten this right, we developers could have been happy and using composable components a decade ago.


I agree; I'm pretty disappointed by how web components turned out. I still use them, actually, on personal projects. But the developer experience is absolutely not where it needed to be, and then the cool kids showed up and stole their lunch.


But components in web apps aren't really new. That's what ASP.NET and JSF was all about, waaaay back when.


Having worked with ASP.NET and other templating engines way back when and now working with React, I can say that there is a world of difference between the ease of composability of ASP.NET templates and React components.

People really want composability and sometimes they need a tiny bit more interactivity than their server rendered stack offers without hacks (even if it's just more interactive forms)


The fundamental conclusion I reached is that all of the pain originates by having your state divided between client & server, and then being forced to synchronize the two.

Doesn't really matter what technique or framework is involved. The worst trip I've had is with angular 2, trying to put humpty dumpty back together again via tons of janky API calls.

The best solution I have used so far is probably Blazor. In server mode this gets you pretty damn close to 100% state lives on the server. I've had to write some javascript, but even then the abstractions for interop make it very clean throughout. I use eval to dynamically run methods against the client. This allows me to avoid having to serve any custom javascript source (aside from the blazor client js).


If it's an "application" there are no "pages" as that's an old (very old) and deprecated construct. Now if it's a "web site" with a collection of "web pages" - that's another matter. I've been building commercial SPAs now for over 20 years and I can say with much authority that users prefer "applications" over "web sites" for complex business management tasks.

I've also built some pretty huge static web sites (longwoodgardens.org, for example) and it was an absolute joy to do those as well. I would never recommend that such a client build an SPA.

So I'll state this opposing view - that "web sites" were a mistake when used for complex business applications.


To me one of the biggest distinctions here is the overuse of the term "App". I agree that a lot of websites written as an SPA were the wrong tool for the job. There are tons of examples besides the mentioned media websites where SPA is the way to go. Most, but not all, of the applications I work on feel like desktop class applications and the traditional server side rendered approach simply would not work. This includes when I was on the iCloud Web team working on the iCloud Drive App, the Mail App and some others behind icloud.com.

I think a lot of companies that default to SPA would benefit from a very quick traditional website. However, some of the applications that I've been fortunate enough to work on would never have been possible as traditional websites.


It's funny how the author reinvents caching with service workers in the next daily tip instead of relying on browser caching.


It's seeming pretty fashionable these days to bash SPAs. I disagree that SPAs were a mistake (or that it's often a mistake to reach for an SPA, even today). SPAs are now, and have been, a great tool to leverage when you want to build an application without needing to think about servers. Yes, there are tradeoffs. Yes, you should think about the thing you're building for. Yes, you should think about the skills of the developers working on it. That doesn't change the fact that SPAs can be a great choice and not something to be perceived as a mistake.


If the choice was between building a traditional app from scratch, or building an SPA from scratch, then I think the article would have a solid leg to stand on: building an SPA from scratch does involve finding ways around things that browsers provide out-of-the-box.

However, pretty much no one writes SPAs from scratch, and there exist many very popular frameworks that handle the vast majority of the common SPA tradeoffs out-of-the-box, so that in reality, you don't really have to think about e.g. whether links are internal or external.

That's not to say that SPAs are always a good idea, or that they do everything well — the author's point about scrolling to the right place, especially when navigating backwards and forwards, is one thing that many SPAs don't get right, for example. And the rise of next.js and the like have shown that there are other benefits to hybrid approaches, as well, and there isn't a one-size-fits-all solution.

As for why SPAs have become so popular, I think this can be summed up pretty neatly: everything is code. There's something appealing to a software engineer about being able to control everything about how your app behaves, control when and how it loads or preloads data, how navigation works, etc. This comes with tradeoffs, you have to be careful about breaking user expectations and making things accessible, and if you go too far you can end up with a big mess ("just because you can make everything custom doesn't mean you should"), but that underlying freedom to make those choices is appealing.


No shit, Sherlock. Of course, all of this was obvious to any of us with more than 5 minutes of industry experience, alas this field seems super prone to reinventing the same things over and over again, making the same mistakes and never learning from the past.

Also, this article does not even mention the fact that SPAs are terrible for SEO. Of course, that's not an issue for YouTube, but it might be a huge issue for your clients, unless it's ok to not have any search traffic.


SEO for SPAs is a solved problem


It is definitely not solved for the vast majority of SPAs out there. Where it is solved, it is not solved for free or cheaply, as it would've been if these apps used server-rendering tech.


Obviously there’s Next, Nuxt, etc., but doesn’t Google’s crawler “render” JS now?


Kind of, but it is done separately from regular indexing, at a much lower volume and it is filled with loads of gotchas [1] that can make Googlebot skip your site.

In practice, what happens is Google spends their JS rendering resources on the top sites and will very rarely render a new site. Generally, assuming you stay clear of all the gotchas, Googlebot can eventually "see" your JS-rendered content but it will certainly take a lot longer to be indexed and, therefore, it will take longer to appear in search results. If you or your clients rely on organic search traffic for your site, then picking client rendering based tech is certainly a bad decision.

[1] https://developers.google.com/search/docs/advanced/javascrip...


The issue is you need some of both for an "optimal" solution.

Unfortunately, Javascript and the basic "controller" capabilities of a web browser just aren't up to the task.

Here are your "controller" options:

1) the server is the controller

2) the web page is the controller of itself (which sucks)

When what is probably needed is:

3) a controller that lives as an extension of the browser.

Frames, for all their issues, basically enabled #3. A controller frame could manipulate the view frames, and those frames were scoped/contained web page "processes". There were issues of course, if you hard-refreshed the overall page (and the "controller" frame) it would lose state, so I don't want to say that frames were a complete solution.

Client storage is half of the equation, and that was added. Good caching infrastructure, multiple asset delivery, connection multiplexing, that seems to be working so you don't need a "big bertha" first page. Standardized DOM capabilities and the like seem to be fairly well solved, and maybe the "Javascript problem" will be solved once... crap, the thing where you have an IR bytecode so you can code in anything in a browser (WebAssembly)... well, that will be solved.

Frankly this needs to be solved. Apps and app stores are a pox upon the land, and the web browser can still save us from that.


"SPAs were a mistake"

Meanwhile essentially every major tech property develops SPAs and users enjoy them far more than traditional hypertext web round-trip-every-change pages


Big tech companies love wasting money and developing a SPA is a good way to do that.

Whether users prefer SPAs or not is debatable. Amazon, Github, and Aliexpress aren't SPAs because poor usability would cause users to move to competitors. If Hacker News turned into an SPA, I guarantee that it would suck big time.

SPA developers seem to be living in a tiny bubble oblivious of simple solutions to simple problems.


> Github

In what world is this not an SPA? I hardly see page transitions anywhere, except maybe when you're switching from marketing material to repos.

Additionally, the fact that GitLab (which is devops/issue tracking/wiki/etc.) ISN'T an SPA drives me crazy every day. I have to wait for so many rerenders that could just be seamless loads, it's actually insane to me how unoptimized of a user experience it is, especially when my happy path is super clear with respect to what content I load.

> SPA developers seem to be living in a tiny bubble oblivious of simple solutions to simple problems

I think it's hilarious who you consider to be living in a bubble, and what simplicity really means. Client side interactions need to be smooth. Web applications are consistently and increasingly presenting meaningful user workflows that can be made more resilient and responsive with SPAs. I develop tools for dataflow/workflow editing, scheduling, visualization, etc., and the idea that these things can be made to feel good without a dedicated client layer is truly an archaic attitude.


You seem to be mistaken about what SPAs are. Github is simply a traditional Rails app that uses Javascript to enhance client side interaction.

A ton of web apps provide data visualization features without being SPAs by simply using libraries like D3.

I don't know what kind of apps you're building but claiming that everyone needs to start using SPAs instead of traditional progressive enhancement with Javascript is not right.


> users enjoy them far more than traditional hypertext web round-trip-every-change pages

"Major tech" often monopolizes its market so it's not really fair to say users enjoy it when there aren't any alternatives.


I've been thinking about this a lot, and the one thing I love about SPAs is that everything the user can do from the front end, they can do from the API. The trend has essentially tricked tons of corporate apps into opening up APIs to their users. My screen scraping skills are starting to get rusty, but it's worth it.

I sometimes daydream about a framework that allows me to build a spa and a server side app at the same time using the same url scheme.
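That daydream can be roughly approximated with plain content negotiation: the same route serves HTML to browser navigation and JSON to fetch/API clients. A hedged sketch of the idea, not any real framework — `renderUser` and its fields are invented for illustration:

```javascript
// Sketch: one URL, one handler, two representations. Browser navigation
// gets server-rendered HTML; the SPA's fetch calls (or any API client)
// get JSON, based on the Accept header.

function renderUser(user, acceptHeader) {
  if (acceptHeader && acceptHeader.includes('application/json')) {
    // API/SPA path: same data, serialized
    return { contentType: 'application/json', body: JSON.stringify(user) };
  }
  // Plain-browser path: same data, server-rendered
  return {
    contentType: 'text/html',
    body: `<h1>${user.name}</h1><p>${user.email}</p>`,
  };
}
```

Rails (`respond_to`) and some newer meta-frameworks do a version of this, but nothing mainstream makes it the default for a whole app.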


Here's the thing: so-called "JAM stack" apps are the cheapest to run, right? I realized this while I was thinking about how to create the cheapest site possible. Mainly, you want to minimize bandwidth. Well, SPAs are designed to do this. The site gets cached on the first hit, and usually runs JSON requests in the background after. So I think a big motivation for building SPAs is that they're cheap to host/run.
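A sketch of the caching half of that argument, assuming the common service-worker setup where the static shell is cache-first and data calls hit the network. The `/api/` prefix is an assumed convention, and the strategy choice is factored into a plain function just so it can run outside a browser:

```javascript
// After the first visit, the shell is answered from cache, so the server
// mostly sees small JSON requests -- which is where the bandwidth savings
// come from.

function strategyFor(url) {
  // Data requests go to the network; everything else is cache-first.
  return url.includes('/api/') ? 'network-first' : 'cache-first';
}

// Service worker wiring (browser only):
// self.addEventListener('fetch', (event) => {
//   if (strategyFor(event.request.url) === 'cache-first') {
//     event.respondWith(
//       caches.match(event.request).then((hit) => hit || fetch(event.request))
//     );
//   }
// });
```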


This article doesn't seem to be thought out properly. I mean the name already suggests what Single Page _Applications_ are good for. They are useful when you want to build an application in a browser and not just a regular website.

State is probably the main factor why you would like to do this. I can agree that SPAs are used in some places where they don't fit (mostly content-heavy sites). But they are not a mistake.


> Browsers give you a ton of stuff for free

True, but irrelevant. I have written an OS GUI, including full file system support, in the browser that performs faster than native OS GUIs. My biggest learning from this is that talking about performance in a job interview is the fastest way to kill the interview. Performance, like security, requires trade-offs that deviate from comfort. If the common developer is insecure, comfort is the only thing that matters.

JS hiring expects the following: React components, 2-6 weeks of JS practice, the ability to use map/filter on arrays, and CSS. If you overshoot that you begin to enter unknown territory that intimidates other developers. The better you are the less employable you become as the further from the bell curve you drift.

If you are wondering why this is or how we got here, the answer is trust, or the lack thereof. In software there is a long-standing bias: if you can see it visually, it's worth less. I have been doing this work for more than 20 years and that bias predates my work, so it's been around for a while. The bias is continuously reinforced by the lack of standards in software hiring and the inability of employers to train their employees.

This lack of trust means that employers do not trust their employees to perform original work, and likewise the developers don't trust each other. This is why absolutely everything, I mean this literally, is a downloaded NPM package, because developers will trust an anonymous stranger to write original code before they trust anyone they know. There are all kinds of excuses to qualify this, some of which are bizarre and nearly all of which lack any kind of merit.

This is why it's irrelevant what the browsers provide. It's baked into the framework, an NPM package, or ignored.


For me, things like react and vue are DEDICATED to the front end. They are great for building UIs and have great ergonomics.

Backend frameworks that I LOVE (ASP.NET, Laravel) have templating languages that feel like an afterthought, and passing data to them sucks (I usually have to redeclare what data they receive somewhere, or it's entirely a dynamic paradigm), editor support is ok AT BEST… all of a sudden part of my app is dynamic C#, which C# isn't good at.

My choice of backend should be absolutely divorced from my choice of front end. I can write an api in node, c#, go, etc and I don’t have to totally change how my front end works.

And splitting state between front end and backend (as mentioned throughout the comments here) is awful and error prone. There is a reason large scale apps aren’t using vanilla js or jquery and hand rolling state management etc.

It’s all about the size of the app, complexity, division of labor (I can hire a front end specialist who spends 70-80% of their time on front end)… the only goal isn’t JUST performance but if you do it right you get that too.


These types of posts always miss a common reason why an SPA might be desirable in the real world: cross-company collaboration.

The assumption made in these posts is that you have total control over your entire application stack: infra, backend, frontend, ops, etc. In the real world, different companies are often collaborating to build a final product for a client.

I worked at an agency that built an SPA because the frontend and backend were built by two separate companies at the same time. We’d meet to agree on what the APIs would look like and what shape to expect in the data payloads. Once we had some sample data, we could build a UI against that data.

When the real API was ready, we just switched the URL in the app config and everything started working.

If we didn’t build that as an SPA, we wouldn’t have been able to make the deadline the client was aiming for.

In a perfect world, would this have made sense to build as a MPA? Probably. But in the real world, you sometimes have to build things given some non-ideal constraints.


I also notice how much longer it takes to build the same crap we built 15 years ago with just html/css/js, and no, projects these days are not more sophisticated. Same old form submissions, but with a monstrosity of a project and crazy rates of devs who want to constantly upgrade/update/migrate to some newer JS framework :shrugged :evilsmile


Many projects are absolutely more complex than they were 15 years ago. The web is now a space for fully-interactive applications, not just interlinked documents and forms


I am comparing apples to apples (CRUD to CRUD), not some random websites.


Okay apples to apples then

This is Amazon in 2006 https://www.webdesignmuseum.org/timeline/amazon-2006

Now Amazon landing page has:

- Omni-search with autocomplete

- Dynamic language and region selection

- Quick order forms accessible by profile dropdown

- Many submenus embedded somewhere on the page

- Multiple carousels of content embedded throughout the page

Nothing like Amazon's current landing page existed on the web in 2006. Basically everything was just CRUD back then. Simple CRUD applications are becoming increasingly rare


yeah, most of those plugins are still jQuery-compatible, so...


SPAs are ideal for web applications where users are manipulating application state in real time. There are many situations where this is applicable; you can't build something like e.g. Office on the web without an SPA. Calling SPAs a mistake is trendy, but it betrays an ignorance of the practical tradeoffs of an SPA.


I usually bite my tongue when these articles pop up weekly, but let’s ask ourselves a question: did Facebook, Apple, Amazon, Netflix, Google, etc all make a terrible engineering mistake? Or did this guy with a blog miss something?

I’ve recently worked on converting a large code base to an SPA and the improvements in DX, UX and performance are enormous. Even the old timers that resisted the project have all come around.

This doesn’t mean your blog should be an SPA. It shouldn’t. but reducing the valid use cases for them to Youtube is equally ignorant.

I generally agree they are being overly applied, but that’s not an indictment on the technology whatsoever. Fads in technology happen (“rewrite it in rust” is a recent one). The fact that many people are falling into this fad says nothing about Rust as a technology other than people love it enough to overuse it.


> did Facebook, Apple, Amazon, Netflix, Google, etc all make a terrible engineering mistake? Or did this guy with a blog miss something?

Today I opened the Music app on my Mac to cancel a subscription. It asked me to use Touch ID to authenticate (fair enough) but then immediately fell back to asking me for my Apple ID & password.

This is on a Mac that's been logged into iCloud for ages, has a working iCloud connection (exchanged some files through it today) and I installed an app from the App Store today (without being asked to reauth, suggesting my session is indeed still valid).

The fact that macOS frequently asks for Apple ID credentials seems to be a common and widespread problem that I myself experienced for years (sadly this occurrence wasn't a surprise for me - if anything the initial Touch ID prompt was more surprising, as I couldn't believe it would actually not ask me for my password this time). So I wouldn't use big names as a sign of quality - if anything it's the opposite: big names mean lots of employees who might have their own reasons for doing certain things, or for preserving a status quo that might benefit them and their careers at the expense of the user experience.


> did Facebook, Apple, Amazon, Netflix, Google, etc all make a terrible engineering mistake?

Is that so impossible? There are many other considerations that go into technology choices at these companies. There are trade-offs involved, and for companies with huge teams of developers the considerations need to be very different than for small–medium sized groups of developers.

I would argue that a smaller group of developers can focus much more on user experience and engineering efficiency, whereas a large company has a organisational scaling issues and a significant bureaucracy to support. At a large company, engineering considerations come second to very many things. It would actually be surprising if the trade-offs and choices those companies made were correct for other very different companies.


ANY modern site uses something like Next.js or another static site generator. The question then is whether you make the site isomorphic or not, and what the need for dynamic content looks like.

There are no other viable options today for large professional sites.

If you have the problems described in the article you probably are not using a web framework.


SPAs exist for a reason. Same with SSR. Developer ergonomics play a bigger role than you think. And if you think that the choice is either SPA or SSR then you haven't been paying attention to recent developments in the frontend world.

Websockets+SSR (PhoenixJS) attempts to store state on the server rather than the client; in that way you can still have stateful interactions with minimal JS. NextJS does an amazing job patching the bridge between SPA+SSR by having the same codebase run on the backend and the frontend. What I would like to see more of from this industry is less "grass is greener on the other side" and the unification of a single tool that allows for the cross-collaboration of SPA + SSR without having two distinct codebases that one has to context-switch between.


I think one of the main reasons SPAs are overused is that it's hard in 2022 to make users wait after clicking on a link. Users are now used to instantaneous responses, especially since they all have smartphones with apps on them. We, as web developers, try to give the best UX to users.


I agree with you and I think there are some good SPAs. In my case what makes me angry is opening a presentation website and seeing a loader because someone decided to make that a SPA.


It is annoying to see titles such as this, since SPAs are most certainly not a mistake (the article itself points out that the Youtube SPA is not a mistake).

The truth is most devs struggle to implement SPAs well. To implement an SPA well, you need to understand browser APIs extremely well (such as the History API), and in my experience, most devs don't, hence the broken back buttons etc.

To me, there is nothing nicer than a well made SPA, and often, the best SPAs are faster and lighter than the desktop apps they replace. I believe they are the best way to deliver software in the current age, and the sandbox that the browser offers actually provides users with a more secure execution environment than traditional software delivery mechanisms.

I personally LOVE SPAs.
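To the point about the History API: the core of back/forward-safe routing really is small, and what broken SPAs usually miss is re-rendering on popstate. A rough sketch — the route table and names are made up, and real routers handle much more (scroll restoration, nested routes, redirects):

```javascript
// Minimal History-API routing sketch. matchRoute is pure so it runs
// anywhere; ':name' segments in a pattern capture URL parameters.

function matchRoute(path, routes) {
  for (const [pattern, handler] of Object.entries(routes)) {
    const keys = [];
    const regex = new RegExp(
      '^' +
        pattern.replace(/:([^/]+)/g, (_, k) => {
          keys.push(k);
          return '([^/]+)';
        }) +
        '$'
    );
    const m = path.match(regex);
    if (m) {
      const params = Object.fromEntries(keys.map((k, i) => [k, m[i + 1]]));
      return { handler, params };
    }
  }
  return null; // no route matched
}

// Browser wiring (not runnable outside a browser):
// function navigate(path) {
//   history.pushState({}, '', path);      // update URL without a reload
//   render(matchRoute(path, routes));     // render the matched view
// }
// window.addEventListener('popstate', () => {
//   // this is the part broken SPAs forget: back/forward must re-render too
//   render(matchRoute(location.pathname, routes));
// });
```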


I am mostly on the SPA-hating camp. There are two issues that annoy me the most. They are not a framework's fault but some kind of "mental model" fault:

First. When doing SPAs, APIs tend to leak a lot more info than needed, opening the door to attacks not commonly seen on server-rendered pages.

Second. Application bundles tend to contain way more than users need, even UIs they should not have access to due to security/permissions. Most developers are not even aware of this problem and don't know that sensitive info/features/endpoints are leaking.

As I said before, those are not faults at the framework level, but the development style that I see used when making SPAs always leads to them.


My company wouldn't know it.

I'm lead on a large Angular 13 app used by tens of thousands of companies, it works and it's very fast.

I wouldn't call SPAs a mistake. I have 2+ decades in the industry and come from an ASP.Net / MVC background prior to Angular.


They were necessary from a perspective of "we need a way to develop a company's web presence in a way we can send kids to code camp to get the gist of." Of course they weren't a wonderful thing in terms of actually building sustainable infrastructure, but what's that to stand in the way of business's need to go to market yesterday?

Code camps don't have time or resources to educate their students so we got the reinvent-the-world half-solution that is the SPA. SPA frameworks won't ever go away but you'll get the chance eventually to go work on something better-architected than your run-of-the-mill NodeJS shitshow.


I don't think that's accurate. SPAs arise from the desire to put something like Slack in a web page. If bootcamps are responsible for that, I don't see how. Isn't it more likely that they meet the demand rather than create it? And bootcamps taught Rails long before they taught React.

And let's be honest, making a server rendered site with some traditional Python framework is in many ways just as easy or easier than making a SPA.


Most major frameworks do that. Barring some devs who navigate using onClick over a link despite having that option.

Outside of some lasting mistakes made a decade ago, is this still an issue?

On top of that, “SPA” is a misnomer, as many frameworks give you a hybrid anyway.


I don't know why routing is the sole focus of the article. In all my SPAs it has been a set up once and forget about it deal. No one is constantly reinventing the wheel there, the churn is in other areas like reactivity and state management.


well summarized.

it's easy to make bad SPAs nowadays. imo more time needs to be spent in the tool/environment/framework/language selection and evaluation process.

also, many SPAs can't even handle basic forms correctly anymore, because developers learn more javascript than html5. like the author pointed out, rebuilding features in js that already exist, just because one is unaware that they exist or what the underlying design choices were, is often a bad idea.

are there actually still people that care about noscript? i mean, i get that most people don't care and js is enabled by default in most browsers, but the no-js web world is also quite interesting.


Traditional dynamic web servers can solve a lot of the same problems that SPAs and event front-end frameworks like React solve, by just applying software engineering principles to the whole stack. That's what Novo Cantico[1] does, and a lot of what I've been trying to say in my blog posts[2] that explain Novo Cantico.

[1] https://github.com/sdegutis/Novo-Cantico [2] https://www.novocantico.org/blog


If not SPA, which technologies would you suggest for a web app similar to an IoT dashboard, with gauges and other UI elements that update every second? The server is written in ASP.NET Core.

I was considering Blazor server side but it does not seem to be very popular (compared to react, Vue, angular, etc) and I am not sure it's a good choice for the long term (MS likes to kill UI technologies). Approaches like htmx and unpoly do not seem to be appropriate if a large portion of UI needs to update frequently (feels similar to implementing the Blazor approach but inefficiently).
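Not a definitive answer, but for read-mostly gauges updating every second, one option besides a full SPA is a server-rendered page plus a thin vanilla-JS layer fed by server push (ASP.NET Core can write a `text/event-stream` response for server-sent events, or use SignalR). A sketch — the `/telemetry` endpoint and the `gauge-<name>` id convention are assumptions, and DOM access is injected so the update logic is testable without a browser:

```javascript
// Sketch: a multi-page app where only the gauges are live.
// applyReadings maps a telemetry message onto existing DOM elements.

function applyReadings(readings, setText) {
  // readings: { temperature: 21.5, pressure: 1013, ... }
  for (const [name, value] of Object.entries(readings)) {
    setText('gauge-' + name, String(value));
  }
}

// Browser wiring:
// const source = new EventSource('/telemetry');
// source.onmessage = (e) => {
//   applyReadings(JSON.parse(e.data), (id, text) => {
//     const el = document.getElementById(id);
//     if (el) el.textContent = text;
//   });
// };
```

Whether this beats Blazor Server depends on how much of the page is live; if it's essentially all of it, a client framework (or Blazor) stops being overkill.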


I love SPAs. When used correctly.

SPA stands for Single Page Application.

It makes sense when what you expect is an actual application. Tools that just happen to be running within a browser for convenience. The browser just happens to be delivery mechanism.

On the other hand if the user expects navigable, hyperlinked content, SPAs just get in the way of navigation and make users' lives miserable for no good reason.

I want to move forward, move backwards, make a bookmark, open multiple tabs. It is frustrating when the application makes it impossible when those would be completely reasonable operations on, say, news site.


Users do stupid things regardless of the implementation. The extra work to implement a SPA is worth it because you can swap material in and out on demand without constructing a complete new web page.

100% disagree with OP.


I think SPAs are awesome, there are just so many cases that SPAs can handle easily and more gracefully. From non-redundant input checks over interactive elements to optimizations for spotty networks or even offline usage.

Anyway, it's painful when a page won't load or takes 10 seconds to load for no apparent reason. That's why profilers need to be used, along with loading indicators within the elements; there's also a trend towards frameworks with small footprints, or even going framework-less.


Using them for everything and everywhere as a default option might have been a mistake. SPAs in general are not a mistake (but generalised headlines do bring in the clicks so there's that).

As a user, when I'm using an "application", on the web or the desktop, the last thing I want is for the entire UI to be reloaded for me after every action. For applications it makes sense to be run on a single page, fetching only the data when it's required and that's it.


I'm still bullish on the potential for Web Assembly apps to overtake mobile binaries.

And instead of Single Page Apps, we'll have just plain Apps that run in browsers.


I would much rather program in any language that is compiled to web assembly rather than JavaScript.


Since I assume you know a bit more about it, what is realistically the best-case timeframe for that to happen?


No idea on timeline. But I'd speculate Apple has no incentive to cannibalize their AppStore ecosystem and make Safari more than a web browser.

So Google/Chrome or Firefox are the only 2 players with enough incentive to make Wasm an Internet app standard.

Or possibly the next Wasm-first, cross platform browser will come along with a Wasm AppStore?


Unless you already have SSR in place (or you're familiar with it, e.g. Django, Laravel, Rails, etc.), I feel an SPA (or MPA) is actually a decent choice despite the learning curve. Separation of concerns is always good. With an SPA, frontend developers can develop the UI using RESTful or other web APIs provided by the backend, while backend engineers can advance the server side and keep the API consistent.


I had the pleasure (aka hell mode) of building a content site as a SPA because some non-technical manager kept pushing it (it was one of those companies).

It was an insane task to meet all the SEO requirements, we ended up doing server rendered pages, and keep in mind this was years ago so there was not a whole lot of support or tutorials on how to do it.

Oh well, I got paid to learn a bunch of new stuff so I can't complain :D


Yeah, I was gonna say SPAs are not really good for marketing type of sites. Doesn't work easily with SEO. I learned this the hard way, but there are other use cases for SPAs. Just don't use it for sites that require good SEO.

I am pushing for a rebuild of that site using Laravel and minimal amount of javascript, actually :D


That was the worst manager I've ever worked for. He also never pronounced my name right even after I told him, so he can go f** off. Still triggers me to this day :D

He pulled the SEO requirement at the last minute, since he would not invite the technical people to decision meetings.


If it makes you feel better, SPAs were a mistake when they were all the craze in Flash. Also way before that in Director.

In those two cases, they signified (or accelerated) the end of the useful lifespan of those languages.

SPAs, like frameworks, always make some things easier and some things harder. As long as you stay within the lines of what is easy, you are good. As soon as you stray outside those lines, you get punished.


SPAs are great, not a mistake. If you use the wrong tool for the job you can make look anything like a mistake. Sure, your two-page hobbyist blog won't benefit from being an SPA, having a well-partitioned Cassandra cluster holding your three-paragraph posts nor having a serverless autoscaling architecture, but that's how people like to build it.


I spent 30 minutes writing a comment explaining how we can use new browser APIs to get rid of SPAs.

Accidentally pressed F5 and all my message text disappeared into the void. The same way it happened 15 years ago, same now; browsers haven't improved.

That's why we do SPAs: we want a reusable text input component, out of the box, which automatically saves user-generated content in forms as you type.
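For what it's worth, that draft-saving behavior doesn't strictly require an SPA framework; a few lines of vanilla JS over localStorage get most of it. A minimal sketch, with the storage injected so the logic runs outside a browser — every name here is invented:

```javascript
// Sketch: draft-saving for a form field. Storage is injected
// (localStorage in a browser) so the save/restore logic itself
// has no browser dependency.

function makeDraft(key, storage) {
  return {
    save(text) { storage.setItem(key, text); },       // call on every 'input' event
    restore() { return storage.getItem(key) || ''; }, // call on page load
    clear() { storage.removeItem(key); },             // call on successful submit
  };
}

// Browser wiring:
// const draft = makeDraft('comment-draft', localStorage);
// textarea.value = draft.restore();
// textarea.addEventListener('input', () => draft.save(textarea.value));
// form.addEventListener('submit', () => draft.clear());
```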


Lame opinion, but I miss the (pre-license-change) ExtJS days

Unified library of tools. An actual GUI designer to design layouts in.

Now everything's just a mess


The argument appears to be that browsers give you a lot of things for free that SPAs need to recreate.

But that's not true in practice. In practice most SPAs are created on top of a handful of extremely well tested libraries/frameworks that do all the work for you that the browser would do in an MPA.

That's not really a good reason that SPAs are a mistake.


I think people have forgotten how easy it is to set up a traditional server-rendered site with Spring Boot or Django or whatever framework. These days I see people adding a SPA as the default starting approach, which adds an entirely separate additional tech stack & deployable to the mix when it's not usually clear why.


If you don’t start with an SPA, you are looked at funny. Like maybe if you were to choose COBOL for the backend code.


SPAs are just Flash 2.0, except it's not popular to hate on them like it was popular to hate on Flash websites.


My dev was asserting that a modest change to how a certain form control works would be "easier in React" (and this is on a piece of our application that's not even in any SPA right now). I marvel at the eagerness with which developers reach for React and friends nowadays.


> SPAs were a mistake. Tomorrow, I’ll show you how we can build MPAs that are just as performant as SPAs, with less complexity and fragility.

You could just show people Unpoly: https://demo.unpoly.com/


lol, another developer who hasn't built an enterprise web app that requires the benefits of an SPA… and hasn't taken advantage of the best practices around Redux state management.

Have fun managing state on the server while building apps that work internationally and offline.


This is probably true if you're writing an SPA from scratch, but in reality much of that complexity is handled by libraries like Next.js. I also think it's much easier to learn a React-like library than to learn various server-side template frameworks.


This opinion gets reiterated over and over and it's still wrong no matter how many times you say it. SPAs are just apps.

I'm sorry people can't tell the difference of when you need a site and when you need an app. That doesn't make web apps a mistake.


But it is so strange: most SPAs are basically just PowerPoint presentations. Why did it go like that? It isn't like people hire programmers to replace their PowerPoint presentations or PDFs, but for some reason that is exactly what they do on the web.


At this point, most articles like this, and most tools addressing the problem discussed here, replace one local optimum with another. I'm holding out for something radically different, maybe something like Makepad, or a Bevy-style ECS with good support for UI.


I wonder how you would build an encrypted messenger like Element (app.element.io) without SPA style.

I think it's not even possible, since you are required to process incoming messages client side and must not send any private keys to the server.


This is irrelevant in practice because the SPA's code is also loaded from the server - if the server is malicious it'll just serve you backdoored JS, unless you load from a separate domain and have the main server allow cross-origin requests.

If you want to defend against a malicious server you need to make sure your client doesn't load & execute code from said server - it needs to be distributed as a stand-alone application instead of in a browser.


> unless you load from a separate domain

Which is the case... app.element.io doesn't host a Matrix server. Servers are completely independent of that.


Also, the majority of websites on the web are landing pages, blogs, news websites or e-commerce sites, all of which are inherently multipage.

SPAs should be used for creating "applications" (as stated by the "A" in SPA), not websites.


Luckily we don't have to use them. Go ahead and do a full page reload when the user navigates to another page. Go ahead and implement interactive elements using jQuery. It all still works just as well as it did in 2012.
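For instance, a sketch of that 2012-style approach: a plain HTML form that works fine with a full page reload, progressively enhanced with jQuery so it submits via Ajax when JavaScript is available (the endpoint URL and element IDs here are made up for illustration):

```html
<!-- Works with a full page reload even with JS disabled.
     The action URL and element IDs are hypothetical. -->
<form id="comment-form" action="/comments" method="post">
  <textarea name="body"></textarea>
  <button type="submit">Post</button>
</form>
<ul id="comments"></ul>

<script>
  // Progressive enhancement: if jQuery loaded, submit via Ajax instead
  // and append the server's HTML fragment without a page reload.
  $('#comment-form').on('submit', function (e) {
    e.preventDefault();
    $.post($(this).attr('action'), $(this).serialize(), function (html) {
      $('#comments').append(html);
    });
  });
</script>
```

If the script fails to load, the form simply falls back to a normal POST and reload, which is the whole point of the technique.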


I fully subscribe to this notion and hate it that our field is riddled with hyped technologies which everyone chases.

Just because something works for Google (Angular) or Facebook (React) doesn't mean it works for your SMB application.


>This is, in theory, supposed to result in web apps that feel as fast and snappy as native apps.

What a bizarre premise. We don't build SPAs as alternatives to native apps. We build them as alternatives to server rendered web apps.


For me, I mostly work on applications, the UI just happens to be in a browser. This is different from server rendered sites that may make more sense in some scenarios.

I happen to really like React, Redux, Material UI, etc... ymmv.


Also, more SPA framework logic needs to become part of the web platform: APIs beyond pushState, service workers, etc. You have to integrate it into the browser to really remove the confusion and the boilerplate.


The worst is when you have to use a piece of tracking or support software in your product and it pulls in a whole SPA with its own react/angular/whatever even when nothing is shown...


If our whole industry is wrong about SPAs, and a whole generation is wrong about SPAs, what is the best practice then? Does anyone have a comprehensive resource on the right way to do things?


I just wanted to note that the example - YouTube - is not an SPA. A click on a video from the homepage loads a distinct page, along with user profiles and other areas of the site.


Not the reasons I expected

I don't see navigation as the main problem; it can be solved quite neatly, and isn't that much more complex than on a normal site. I was expecting something about bundle size, etc.


Agreed. The blog doesn't seem like a very well-thought-out examination, and I expect most of the HN replies are just debating the title.

For me, the biggest con of SPAs is lack of SEO.


I found Blazor to be the right mix between server-rendered pages and SPAs. The programming model is good and adding interaction is easy.

The only issue is with scaling.


While Web2 is moving to SSR, I think SPAs will be a huge part of Web3.

SPAs are perfect for decentralized infrastructure like IPFS and friends. Same goes for mobile apps.


What exactly about SPAs makes them good for decentralization?


Can be put on IPFS without much hassle.


Next.js is a great way to make multi-page web apps these days. Imo it should simply be the default for most use cases


Controversial opinion: SPAs are great and also SPAs are an admission that Java applets were along the right lines.


I guess Blazor WASM is the elephant in the room. OOP as much as you want, with a JS codebase of near zero.


The frustrating thing is that I rarely find a page that doesn't take at least 20 seconds to load, even if I only want to see a 100x100 image. This happens no matter how fast a computer I use, and it's despairing.

The term "server-side rendering" gives me giggles every time


Anyone else encountered Angular routing locking up upon returning to the web app via the Safari back button?


Seaside on Squeak (now Pharo) is still my happy place for web application development.


I'm still insisting that progressive enhancement was the way to go.


This article could have just said "use Next.js" and saved a lot of words


I have always wondered how people gain this self-confidence.


The web is a mistake.


Of course, but who would have believed us if we'd told you sooner?


The article does give good examples of where SPA works.


htmx.org: the one and only frontend revolution
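For the unfamiliar, a minimal sketch of the htmx style: the server renders plain HTML, and attributes declare partial page updates (the endpoint and element id below are made up for illustration):

```html
<!-- Clicking the button GETs /more-rows from the server and appends the
     returned HTML fragment to the list. URL and id are hypothetical. -->
<script src="https://unpkg.com/htmx.org"></script>

<ul id="rows"></ul>

<button hx-get="/more-rows" hx-target="#rows" hx-swap="beforeend">
  Load more
</button>
```

No client-side state or router: the server keeps returning HTML fragments, and `hx-get`/`hx-target`/`hx-swap` decide where they land.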


webapps were a mistake.


no they weren't, stop with these clickbait bad takes

