Ask HN: Why do we walk away from Single Page Applications?
22 points by WolfOliver 10 months ago | 26 comments
Before server-side components, if I wanted a native app (iPhone, Android, ...) I had to rebuild my SPA; now with SSC/SSR/HTMX I need to rebuild my backend as well?!?!? Just to save some microseconds?



The trends to and from SPA are driven by improvements in "edge" services to support dynamic content:

In the old days, we built dynamic web content with "AJAX". There was a lot of latency for a request to reach your application server. It was also kind of hard to make the servers respond with low latency.

So we added more client-side scripting. We had things like script.aculo.us, and settled into jQuery. This improved the responsiveness of some aspects, but split the user's front-end experience across different systems, written in different languages, possibly developed by different people.

So the SPA unified things again. It maximized the part of the application delivered by the low-latency CDN. Only the essential dynamic data was transferred from the higher-latency back-end server. It fixed the cohesion of front-end development by moving entirely into JavaScript frameworks. So, done well, this improved responsiveness for the user of dynamic web content. On the other hand, maximizing the SPA resulted in a new problem: users get slow or inconsistent performance because they must download a big JavaScript application. This new kind of load time turns out to be pretty hard to optimize.

Now in 2023, it's easier to build low-latency back-end servers. We have various "edge compute" options. It's not too hard to put a copy of your application server in each region around the world. So it makes sense to run the application on the server again, now that it's distributed. Moreover, our web development frameworks are helping to keep the web application design cohesive.

So the move to SPAs, and now the move away from them, are both driven by the desire to give users a low-latency experience. Changes in the available infrastructure have changed the optimal balance of server-side and client-side execution of the application. And improvements in our development frameworks make it easier to take advantage of the infrastructure improvements.


That's great when the data you need to render actually exists at the edge. Most of the apps I have dealt with need to pull data from a centrally located database for almost every request. This edge setup would actually increase user latency whenever there are multiple DB queries per request (this is true of all requests if you have to query a DB to validate the user or their permissions, although that can be avoided with a signed token approach).
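
A minimal sketch of that signed-token approach, assuming a JWT-style bearer token and the jose library (both assumptions, not something from the thread):

    // Hypothetical edge handler: the signature check happens locally,
    // so no round trip to a central database is needed just to
    // authenticate the user.
    import { jwtVerify } from "jose";

    const secret = new TextEncoder().encode(process.env.SESSION_SECRET!);

    export async function authenticate(request: Request): Promise<string | null> {
      const token = request.headers.get("Authorization")?.replace("Bearer ", "");
      if (!token) return null;
      try {
        // The user id (and permissions, if embedded) travel inside the token.
        const { payload } = await jwtVerify(token, secret);
        return payload.sub ?? null;
      } catch {
        return null; // invalid or expired token: treat as unauthenticated
      }
    }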

There's some potential advantage in using the cloud provider network after the edge, but it's already a known pattern to stick just a network proxy at the edge to gain the advantage of sending the rest of the request over the cloud provider network rather than sending it over the internet.

Cloudflare has a query cache tool that lets you set a TTL on database read queries and keep the results at the edge, avoiding the DB latency [1]. I think, though, that you could do the same at an edge network proxy working at the request/response level rather than the DB level (sketched below). But perhaps SSR is superior for apps that can show users stale data.

[1] https://developers.cloudflare.com/hyperdrive/learning/how-hy...
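
For the curious, here's a rough sketch of that request/response-level cache as a Cloudflare Worker (the 30-second TTL and the Worker setup are illustrative assumptions; types come from @cloudflare/workers-types):

    // Hypothetical Worker in front of the origin: whole responses are
    // cached at the edge with a TTL, instead of caching DB queries.
    export default {
      async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
        const cache = caches.default;            // Workers' built-in edge cache
        const hit = await cache.match(request);
        if (hit) return hit;                     // stale-but-fast path

        const response = await fetch(request);   // forward to the origin
        const copy = new Response(response.body, response); // make headers mutable
        copy.headers.set("Cache-Control", "max-age=30");    // 30s TTL (assumption)
        ctx.waitUntil(cache.put(request, copy.clone()));
        return copy;
      },
    };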


Nice summary.

Add to this the need for greater security for some elements of the application, which often necessitates server-side activity, and you have a recipe for the current web app SotA.


Hasn’t the need for interactive web apps with highly polished UX been equally if not more important?


You don't need to do anything. If you're rewriting based on trends and not need, you're just following fashion. Does your app work? Keep it as is.


In the case of HTMX I would need to, though?


Why are you adding that to your system?


I find the combination of a component framework such as React with a fully fledged backend such as Laravel or Rails with Inertia.js is perfect. The best of both worlds.

All these server-side components are just over-engineering no one needs, and even if you think you need them, the complexity and vendor lock-in they introduce are not a trade-off worth making, in my opinion.


Why would you need to rebuild the backend? A SPA usually consumes data from services, be it HTTP/JSON, gRPC or whatnot. Your server-side components can consume the very same services with no changes in the backend.


That's how it used to be. Now you have server-side rendering, so the server also needs to do something other than return data: render pages.


But those SSR frameworks don't require much work. E.g., if you use Next.js, the amount of work to enable SSR is not much higher than developing a strict SPA with CSR only.
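
To illustrate, a minimal sketch with the Next.js pages router (the endpoint URL is made up): the same React component works either way, and adding getServerSideProps is roughly all it takes to opt the page into SSR.

    // pages/items.tsx -- a sketch, not a complete app
    import type { GetServerSideProps } from "next";

    type Props = { items: string[] };

    // Runs on the server per request; the data fetch happens close to
    // the backend instead of in the user's browser.
    export const getServerSideProps: GetServerSideProps<Props> = async () => {
      const res = await fetch("https://api.example.com/items"); // assumed endpoint
      const items: string[] = await res.json();
      return { props: { items } };
    };

    // Rendered to HTML on the server, then hydrated on the client.
    export default function ItemsPage({ items }: Props) {
      return <ul>{items.map((i) => <li key={i}>{i}</li>)}</ul>;
    }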


What's the benefit then? Having another process running so I can justify the Kubernetes overhead?


If it takes 5 API calls to populate your page, do you want to make 5 round trips to wherever your user is, or 5 within the same datacenter and 1 to your user? If the answer is that it doesn't make a difference, then you don't need these tools.
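
A sketch of the idea (the endpoint names are invented): the server fans out in parallel over the fast datacenter network, and only one response crosses the slow network to the user.

    // Hypothetical server-side aggregation of five internal API calls.
    export async function dashboardData() {
      const get = (path: string) =>
        fetch(`http://internal.example${path}`).then((r) => r.json());
      const [user, orders, prefs, notices, stats] = await Promise.all([
        get("/api/user"),
        get("/api/orders"),
        get("/api/prefs"),
        get("/api/notices"),
        get("/api/stats"),
      ]);
      return { user, orders, prefs, notices, stats }; // one payload to the user
    }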


I think that could be a good argument for a BFF (backend for frontend), not necessarily for an SSR app. IMO the only times it makes sense to pre-render your app are when initial loading time and/or SEO are of utmost importance.


You can run the SSR server in the same process, and the main benefit is that the client is a lot thinner (only needs to swap in HTML, no need to render anything).

Depending on the server framework, it should be perfectly possible to e.g. have uniform routes for JSON data and HTML hypermedia, although you do need to be careful with churn in such cases; you don't want a website redesign to break your native app. The HTMX author has an essay describing a solution: https://htmx.org/essays/splitting-your-apis/. Essentially, just implement two APIs! Since you can do this in a single backend (e.g. Django project) the amount of duplicate code should remain fairly limited anyway.
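
The essay uses Django, but the same split is easy in any backend; here's a sketch in Express (routes and data are invented for illustration):

    import express from "express";

    const app = express();

    // One shared data function backs both APIs.
    async function getTasks(): Promise<{ id: number; title: string }[]> {
      return [{ id: 1, title: "Ship it" }]; // stand-in for a real DB query
    }

    // Stable, versioned JSON API for the native app.
    app.get("/api/v1/tasks", async (_req, res) => {
      res.json(await getTasks());
    });

    // Hypermedia API for HTMX: free to churn with the site's design.
    app.get("/tasks", async (_req, res) => {
      const tasks = await getTasks();
      res.send(`<ul>${tasks.map((t) => `<li>${t.title}</li>`).join("")}</ul>`);
    });

    app.listen(3000);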

Also, many modern SPAs are highly complex in order to do things that don't require any work in a hypermedia application. For example, if you want JS-disabled clients to be able to view your page, you need to run a copy of the client on the server, necessitating a full JS runtime there (bad if you're not using server-side JS already!). And then you also need complex hydration logic so that it gracefully transitions to a full SPA. Plus this solution sucks if you want to use something other than JS on the backend.

With hypermedia tools like HTMX it is generally way easier to achieve progressive enhancement (although it will still require some thought, see e.g. https://htmx.org/docs/#progressive_enhancement).


With SPA you only need one process (The API), the static SPA files should be placed on a CDN.


SSR is almost always slower than SPAs. It forces a waterfall on first page display (you can't stream HTML for something that hasn't been fetched from the database yet), and SPAs mainly fail at SEO. If you're not e-commerce, you probably just don't need it.


Honestly? I believe that the new hype around Server Side Rendering (SSR) frameworks is only a matter of vendor lock-in: when you have SSR, you *NEED* edge rendering, which only a few vendors can provide today.

Single Page Applications (SPA) are totally fine. My blog is an SPA (https://kerkour.com) and has no problem being indexed by the major search engines.

It's actually way faster than most webapps using the shiny new SSR frameworks, as I can cache the different chunks/assets with precision.

Finally, everything is served from a server that barely uses more than 50 MB of RAM even under high load. Last time I looked, Next apps needed around 500 MB-1 GB of RAM to serve only a few visitors.


Your web site is serving up static, pre-rendered content. Further, it's actually more SSR-ish than SPA -- each request for a blog post returns the new page, except it is encoded as a JSON object which your front-end must interpret and convert to HTML, instead of just returning the updated HTML fragment itself.


Why would you need edge rendering for ssr? What's wrong with a regular server?


>has no problem being indexed by the major search engines

Naively, I never really questioned that SSR was better for SEO; what about your app do you attribute its successful indexing to?


> Next apps needed around 500MB-1GB of RAM to serve only a few visitors.

Not sure why you’re getting downvoted - your comment seems to add valid points like this to the discussion.


There are tradeoffs with anything. Sending HTML over the wire has benefits you're dismissing, like faster response times, since responses are cacheable. (The benefit can easily exceed some microseconds, especially if we're measuring time-till-the-user-can-actually-do-something.)

SPAs also have tons of benefits. Local state is awesome for instant reactivity once the page is loaded. But you've also introduced an API that you now need to maintain, one that, 9 times out of 10, will only ever be used by this SPA and nowhere else.

It's just acronyms; pick whatever works for you. There doesn't HAVE to be one "best practice" that covers all use cases.


Huh, why is being cacheable unique to HTML? What's stopping an SPA from using a cached JSON response?


Hype.

It only applies to small use cases, yet frameworks (especially Next.js) want to sell you stuff.


I read a hype piece about “multi-page applications” and thought “so we’re back to making websites.”

SPAs seem like a good choice when the app should be able to preserve full functionality when offline.



