It’s a bit unfortunate that none of the SSR-supporting libraries and frameworks in JavaScript seem to allow the final website to be built in a pure standards-compliant ES runtime. You have to bring in all of Node or Deno because everything tends to be hard-coded to depend on the filesystem and other non-standard APIs.
The VC-funded JS startups have no incentive to promote this, but you can still just call renderToString in any ES runtime to SSR your React components. There's definitely more boilerplate to deal with, but you could easily get away with doing SSR in a lightweight runtime like QuickJS.
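For illustration, a minimal sketch of that, assuming React has already been bundled for whatever ES runtime you're targeting; renderToString is string-in, string-out and touches no filesystem or Node-specific APIs:

    import { createElement } from 'react';
    import { renderToString } from 'react-dom/server';

    // Using createElement directly avoids needing a JSX build step.
    function Greeting({ name }) {
      return createElement('p', null, `Hello, ${name}`);
    }

    // Pure computation: component in, HTML string out.
    const html = renderToString(createElement(Greeting, { name: 'world' }));
    // html is now a plain string along the lines of "<p>Hello, world</p>"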
agreed. I once played around with the idea of a framework that rendered server-side then immediately registered a service worker that would perform all subsequent renders on-device. It did work but there were enough rough edges with compatibility that I never got anywhere concrete with it.
I find this confusing. My understanding of web components is that they exist to encapsulate fragments of the DOM and use JavaScript to extend the functionality of elements. I don't understand how SSR could possibly help here, other than taking the first rendering pass out of the loop and generating standard HTML...which would then mean it's not a web component and just typical SSR.
> …which would then mean it's not a web component and just typical SSR.
SSR doesn't mean that you can't use web component frameworks, just that the framework must be able to both (1) produce and manipulate DOM in the browser as output, and (2) render the same components into HTML strings on the server. I believe frameworks with this capability are categorized as "isomorphic" frameworks.
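A loose sketch of that dual contract; the function names here are hypothetical, not any particular framework's API:

    // The same component description feeds two render targets.
    const greeting = { tag: 'p', text: 'Hello' };

    // Server target: component -> HTML string
    function renderToHTMLString(c) {
      return `<${c.tag}>${c.text}</${c.tag}>`;
    }

    // Browser target: component -> live DOM the framework keeps manipulating
    function renderToDOM(c, parent) {
      const el = document.createElement(c.tag);
      el.textContent = c.text;
      parent.appendChild(el);
      return el;
    }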
Enhance WASM essentially makes Web Components an isomorphic component framework, with a bonus (for some) of not having to use a server-side JavaScript runtime.
That makes sense, though the first paragraph of that page talks about how web components depend on client-side javascript.
My confusion is that a web component by definition has to run client-side javascript. You're either (a) using javascript to port content from a template into your custom element, or (b) using javascript to attach a shadow dom to a custom element. I could be wrong about this, but I don't believe there's a way to send down a working web component from the server without accompanying client-side logic.
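For reference, this is the kind of client-side logic being described; a minimal custom element doing both (a) and (b), with a made-up tag name:

    // A <template> holds the markup; attachShadow gives the element an
    // encapsulated DOM subtree to clone it into.
    const template = document.createElement('template');
    template.innerHTML = `<style>p { color: rebeccapurple; }</style><p><slot></slot></p>`;

    customElements.define('fancy-note', class extends HTMLElement {
      constructor() {
        super();
        this.attachShadow({ mode: 'open' })
            .appendChild(template.content.cloneNode(true));
      }
    });

    // Usage in markup: <fancy-note>hello</fancy-note>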
I had a skim of the Lit SSR docs and they didn't really spell it out either.
My best guess at something that might make sense: the idea is to pre-render something usable but static with SSR, and then once that's delivered to the client it can upgrade itself back into an interactive client-side web component?
No idea if that's what these are trying to do though
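That is roughly the pattern declarative shadow DOM enables: the server ships markup inside a `<template shadowrootmode="open">`, the parser builds the shadow root with no JS at all, and defining the element later upgrades it in place. A rough sketch (the element name is made up):

    // Server-rendered HTML (visible as static content before any JS loads):
    //   <my-counter>
    //     <template shadowrootmode="open">
    //       <button>Count: 0</button>
    //     </template>
    //   </my-counter>

    // Client: defining the element later upgrades the already-parsed markup in place.
    customElements.define('my-counter', class extends HTMLElement {
      connectedCallback() {
        // this.shadowRoot already exists; the parser created it from the declarative template
        const btn = this.shadowRoot.querySelector('button');
        let n = 0;
        btn.addEventListener('click', () => { btn.textContent = `Count: ${++n}`; });
      }
    });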
Unfortunately, it's not likely to help with FOUC. Flash of unstyled content happens when CSS isn't parsed and applied before the DOM is loaded and displayed. Even if one applies classes directly to DOM elements, if the stylesheet is still loading, you're going to see that flash. The only way to deal with it is to block rendering until the styles are parsed (put them in the HEAD of the doc, etc.).
You can, and that is effectively doing something similar to putting a style tag in the head of your document. It'll block until the styles are parsed and applied. If you have a ton of styles, your Lighthouse score will suffer because your first contentful paint will take longer.
Well yeah if you want to avoid a flash of unstyled content then you have to wait for the browser to style the content. The main problem is the style duplication issue which someone else pointed out elsewhere in this thread.
The browser cache doesn't help on the first visit when the cache is empty, and the cache will also have to be invalidated whenever the component's CSS changes. And if the CSS file isn't cached then you also have to contend with the flash of unstyled content problem while the browser loads the CSS file. The Lit docs do a good job of explaining this: https://lit.dev/docs/components/styles/#external-stylesheet
This means that if you have multiple instances of the same web component on the page, you need to duplicate the whole style tag in each (right?), which seems unwholesome.
I'd never thought of this but you're totally correct. The only way to encapsulate styles within a shadow dom is to create that style tag (or use the CSSOM to generate a CSSStyleSheet), but either way it's the same outcome. The styles themselves are copied to each individual instance. Yuck.
This is a great point. If you have many instances of a web component and those components have a lot of styles, you're better off hoisting your styles into the document head and styling those components using the css `::part()` pseudoselector.
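A sketch of that hoisting approach: the component exposes inner elements through the `part` attribute, and a single stylesheet in the document head styles every instance (names made up):

    customElements.define('fancy-button', class extends HTMLElement {
      constructor() {
        super();
        // `part` exposes the inner button to outside styling via ::part()
        this.attachShadow({ mode: 'open' }).innerHTML =
          `<button part="button"><slot></slot></button>`;
      }
    });

    // One copy of the styles, in the document <head>, covering every instance:
    //   <style>
    //     fancy-button::part(button) { background: rebeccapurple; color: white; }
    //   </style>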
That avoids duplication at the cost of an extra network request. The browser has to make a full network round trip to download the html page, find your link element, and then make another network round trip to fetch the linked css file.
When the CSS file is served from the same host and preloaded, the same connection is reused and multiplexed in HTTP/2, and subsequent requests are served from the browser cache.
HTTP/2 multiplexing is great for issuing multiple requests within the same HTTP connection, so that you don't have to deal with the overhead of establishing separate connections for each resource that's being fetched, but that doesn't solve the problem I'm talking about. Link preloading is great for having the browser fetch resources in the background before you need them, but if you're server side rendering your components you presumably need those components to be styled immediately (otherwise why are you server side rendering them in the first place), so that also doesn't solve the problem I'm talking about.
What I'm saying is that if you inline your styles in a style tag (either in the shadow DOM or in the head of the document) then the browser doesn't have to make any additional requests at all, because the styling information is present in the initial document already.
If you don't inline your styles, then the browser has to make two separate requests if the cache is empty: one to fetch the document and figure out which additional resources need to be loaded, and then an additional request to actually download those resources. There's no HTTP protocol shenanigans or link preloading that can change the fact that packets are being transmitted from the browser to the server and back again twice. It's basically another formulation of the N + 1 query problem. Once the CSS file has been cached you only have N requests instead of N + 1, but then the cache has to be invalidated every time you update your styles so you're back to square one again. The documentation for Lit explicitly recommends avoiding external stylesheets for web components, because it can lead to a flash of unstyled content while the browser makes an additional request to load the external styles: https://lit.dev/docs/components/styles/#external-stylesheet
External CSS in `<link rel="stylesheet">` has been used since the beginning of CSS, and it is render-blocking by default (see https://web.dev/articles/critical-rendering-path/render-bloc...), so as long as the CSS is small and on the same server, it won't cause FOUC. On the other hand, putting all the CSS inline in `<style>` makes the HTML document carry the same styles on every reload instead of pulling them from cache. The best of both worlds is to embed a lightweight basic CSS stylesheet inline and the rest in cache-able external CSS files. What I am proposing is not to use JS at all, just pure CSS + pure HTML out-of-order streaming (e.g. https://github.com/niutech/phooos), which is very performant.
The introduction of the shadow DOM API changed the semantics of the link element. If the link element is placed inside of the shadow DOM, then it's not render blocking and you will experience a flash of unstyled content. That's what the lit docs are referring to. I've mentioned this because you originally joined this comment chain with a comment about putting link elements inside the shadow DOM.
If you place the link element inside the head of your document, then it is render blocking, which means the browser has to make two round trips to the server if the CSS file isn't in the cache before it can render (one to download the HTML file, and then another after it discovers your link element, and has to download the corresponding CSS file).
> The best of both worlds is to embed a lightweight basic CSS stylesheet inline and the rest in cache-able external CSS files.
This is the optimal way of doing it. You would have to analyze your styles to see which ones are applied to elements above the fold, then extract them and put them in an inline style tag. The rest of the styles would have to be downloaded via a link tag, but you'd have to place the link tag at the very end of the HTML body to prevent the browser from blocking as soon as it encounters the link element, or alternatively use JavaScript to add the link element after the page has been rendered. There are tools to automate this for static sites [1], but doing this for dynamically generated HTML is kind of a pain, and I've found that browsers parse CSS so quickly that the overhead of just inlining it all is very low in many cases.
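A sketch of the second variant, loading the non-critical stylesheet from JS after first paint (the path is illustrative):

    // Critical, above-the-fold rules stay inline in the <head>:
    //   <style>/* header, hero, and layout rules */</style>
    //
    // Everything else is attached after the load event so it never blocks rendering.
    window.addEventListener('load', () => {
      const link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '/css/non-critical.css';
      document.head.appendChild(link);
    });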
By using client side rendering you’re effectively playing SEO on hard mode. It’s all possible, but you’re making life very difficult for yourself.
Google will crawl and render client side only sites, but the crawl budget will be reduced.
The bigger factor is that Google cares a lot about long clicks - clicks on results which don’t immediately produce another search or a return to the results page. Client side rendered sites almost always perform worse from the POV of the user and therefore convert at a lower rate.
And now that Web Vitals includes things like Largest Contentful Paint and Interaction to Next Paint, you're going to find it much harder to bring these metrics under the target thresholds.
If you want to perform well in search, make things easy for yourself: use mostly SSR HTML and CSS and some sprinkles of JS on top.
"Client side rendered sites almost always perform worse from the POV of the user and therefore convert at a lower rate."
I assume that people who create client side rendered sites disagree. Surely no one wants to make user performance worse, and I struggle to see advantages sufficient to offset that.
Generally very few people care, unless it's "an impact metric" tied to that team, or there's a "performance sprint" or something. "DX" has been more important than "UX" in the current mainstream JS community for a while. To give an example: Babel, which enabled the syntactic sugar, also caused multiple runtime implementations of various ES6+ features to end up in bundles, plus polyfills for browsers the clients aren't actually using, etc.
DX can't be at the cost of the user's priorities and needs.
Whatever is developed is for the user, not the developer.
If developers making their lives easier (or appearing to, through more layers of abstraction in some cases) is more important than the user, the users will go where the best experience is.
Beyond things like developer products, end users overwhelmingly don't care what anything is coded in.
It’s not true. Many people are forced to use MS or Google products; people use things for many reasons, like where their community is (Instagram, FB), or what they are forced to use by their employers. Or, for example, one tool has bad UX but is compliant in one way or another. It’s simply not the case that people "will go to where the best experience is".
I even admin and moderate FB groups and it is a constant struggle against the system - but that is where the community is, I have no choice. Other admins do not like it either, but it is where all the related groups are, and people are nervous of relying on solutions from volunteers (e.g. me setting up an old fashioned forum).
Just check https://grumpy.website: there are lots of mainstream, popular apps/websites/tools with plain broken UX, and users have to work around that but keep using them.
Cost is one of the most important criteria for end users; often much more important than speed. If something lowers the cost of development, that has a direct benefit to end users.
I don’t believe it’s true either: UI specialists often aren’t working with backend templates and backend release cycles, and backend developers generally have weak UI skills.
Either way, doing more work to generate the same HTML/CSS in the end makes for a slower experience, and one that's too often more brittle in the long run.
Where SEO is involved, simplest wins. There's a reason why Wordpress focused on making words easy to publish, organize, and connect, and from there try to be a CMS.
> I'm also curious why rendering something in the browser would have poor performance or wouldn't be optimally accessible?
To render something in the browser, you have to first send the browser javascript, which then needs to run. Server side rendering can usually generate HTML faster than the javascript renderer can generate DOM. And the HTML itself is often smaller than the javascript needed to render it. (And you can generate HTML on your server, not on your user's slow android phone via javascript.)
Server side rendering almost always gets you a faster time-to-first-paint, which is good for blogs and news websites. But if you need client side rendering anyway (e.g. you're writing Figma), then it's not as big a deal.
Sending everything to the browser to crunch means a lot of downloading, and then everything runs at the speed/mercy of the client computer, relative to the resources it has and what's actually available (relative to other tabs).
Server side caching is pretty good these days and a valid option in more cases.
A lot of devs just started client side, and the other side naturally seems bad or unknown. The time when this was an issue server side was when Linux didn't network or perform so well under lots of traffic (and the cloud became popular). That's been largely solved.
I've also wondered this. I have single page client-rendered web apps that get crawled and indexed by Google just fine. The only problem I've run into is when sharing a link on something like WhatsApp, their crawlers don't get the OpenGraph metadata that gets rendered based on the page/link, so the share preview isn't as nice.
As others have mentioned, Google does client side rendering. What I was told by SEO people in my previous role (unverified otherwise) is that there's essentially a CPU budget per site, and while you can client-side render, it takes more of that CPU budget. For a small site this may be fine, for a large site with many pages it may prevent frequent re-crawls of the site. That said, I gather the budget is not the same for all sites, but varies by other factors. As I said, unverified, but it makes sense and seems like a good way to do it.
CPU Budget (and page budget) is dependent on website popularity.
You can see it clearly when you start a new website: Google will index 10 pages at most, and as your website grows in popularity it allows 100 -> 1,000 -> 10K, etc.
The CPU budget is, for sure, used in Google's ranking of websites, because they know how long it takes a site to load, and that is a way to test the speed of a site.
Server side rendering is generally a lower CPU hit for content that is more SEO-specific.
Only tangentially related, but I wonder if "indexability" is even relevant anymore. Google is getting close to useless for searching, and I would bet that all of the big tech companies are racing to write better crawlers to make sure they can suck up all the sweet free content for their next AI model, the incentive for making your site crawlable is inverted in an ironic way.
I feel it's much easier/better to concentrate on social media or sites like hn/reddit to increase your site's discoverability.
> I'm also curious why rendering something in the browser would have poor performance or wouldn't be optimally accessible?
That's really a tough one to answer since every site is different, but here's a few things to keep an eye on when doing CSR.
You don't own the hardware rendering your site. Slow devices will render pages more slowly and there's really nothing you can do about it beyond optimizing the rendering logic itself.
CSR will always add more network requests to the page. For a small site, or a user close to your servers this may not be an issue. Caching on CDNs will help for any static files too.
Any API requests that are required for rendering will likely introduce waterfall network requests. If the APIs live on your servers, that could have been drastically reduced or removed altogether if the HTML rendering had been handled by the server in the first place.
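A stylized sketch of that waterfall; the endpoint names are made up:

    // CSR waterfall: each step can only start after the previous one finishes.
    // 1. GET /page         -> near-empty HTML plus a <script> tag
    // 2. GET /bundle.js    -> discovered only once the HTML is parsed
    // 3. GET /api/articles -> issued only once the bundle executes
    async function mount() {
      const res = await fetch('/api/articles');       // round trip #3
      const articles = await res.json();
      document.querySelector('#app').innerHTML =
        articles.map(a => `<article>${a.title}</article>`).join('');
    }
    mount();

    // With SSR, the server calls the API (or the database) directly and the
    // very first response already contains the rendered <article> markup.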
For accessibility, I'm not aware of anything that can't be done accessibly with CSR, but there are things that can get tricky. Every accessibility tool is different. Making sure that content added and removed is updated properly, changes are notified to users when necessary, and that focus is handled properly are all up to you when you decide not to "use the platform" as they say.
It's hard to find concrete evidence for anything in SEO. However, from what I've seen (and what's also shown by some big websites that index just fine), at least Google seems to handle client side rendering just fine and has been doing so for ~5+ years.
For my part, I've had exactly the opposite experience. Without SSR or equivalent, you often have to wait weeks for your content to be indexed on Google, and the quality of the results varies. I'm talking about sites of at least several hundred/thousand pages.
Same. There's lots of concrete info out there the more time one spends in SEO, openly seeing how non-tech people achieve it, and most of the time (not surprisingly) it's WordPress, which is server side.
Most websites, especially templated ones, are extremely heavy in their assets on first load.
The difference in vitals between the average client side site and the average server side site can be staggering.
For example, take a website rendered client side on a headless CMS or something versus WordPress: WordPress has many, many plugins that obsess over optimizing assets and file delivery, beyond most client side implementations.
I love building myself a client side site, and it feels so fast, until the need to communicate better or at scale increases, and it's a reminder why the website should just be something that is good at being a website.
I don’t think this CPU budget actually exists… it’s certainly never been publicly mentioned as far as I can tell. I think it’s another one of those half truths floating around the SEO world.
Is there some concept of a budget regarding how many pages they should index on your website? Yes. Is it the same thing as a CPU budget? No. Does it impact what position you rank in? Absolutely not.
Client side rendering is harder for SEO than server side.
Server side has many advantages.
If the key is delivering SEO information, client side dynamic UX might not be the core of that experience, and may not be the best application for that part of it.
If you wanna play in someone's network you gotta follow their arbitrary rules.
Google's? A bunch of random ass metrics.
Facebook's? Well, I guess you're making a Business Page.
Apple's? The list is so long.
JavaScript frameworks masquerade as SDKs and whatever, when really something like Next.js is a middleware with the same eventual product development complexity as Unity. Web developers think they have 2 browsers (Mobile Safari and Chrome). It's really like 10+ game consoles: Meta Instagram, Meta Facebook, Meta WhatsApp, TikTok iOS, TikTok Android US, TikTok Android Worldwide, Google Search destination, App Store iPhone, App Store iPad...
Interesting. I'm currently doing some work to enable React SSR in the Micronaut framework, which might be useful for some people working on the JVM. There's also https://elide.dev which is a polyglot server framework that also supports SSR.
I didn't fully understand what WASM is doing here though. Web components, whether React or otherwise, are written in JavaScript or something that compiles to it, so you need a server-side JS engine (Elide/my Micronaut work use GraalJS). Is WASM just being used here as an alternative to providing a .so/.dll file?