The GraphQL which ‘elegantly’ returns a 200 on errors? The GraphQL which ‘elegantly’ encodes idempotent reads as mutating POSTs? The GraphQL which ‘elegantly’ creates its own ad hoc JSON-but-not-JSON language?
The right approach, of course, is HTMX-style real REST (incidentally, there needs to be a quick way to distinguish real REST from fake OpenAPI-style JSON-as-a-service). E.g., the article says: ‘your client should be able to request all data for a specific screen at once.’ Yes, of course: the way to request a page is to (wait for it, JavaScript kiddies): request a page.
An even better approach is to advance the state of the art beyond JavaScript, beyond HTML, and beyond CSS. There is no good reason for these three to be completely separate syntaxes. Fortunately, there is already a good universal syntax for trees of data: S-expressions. The original article describes SDUI as ‘essentially it’s just JSON endpoints that return UI trees’: in a sane web development model the UI trees would be S-expressions macro-expanded into SHTML.
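The comment stops short of an example, so here is a toy sketch of the “one tree syntax” idea in TypeScript, with nested arrays standing in for S-expressions and a small render function standing in for real macro expansion into SHTML (everything here is illustrative):

```typescript
// Toy sketch: a hiccup-style tree (nested arrays standing in for
// S-expressions) rendered to an HTML string.
type Tree = string | [tag: string, attrs?: Record<string, string>, ...children: Tree[]];

function render(node: Tree): string {
  if (typeof node === "string") return node;
  const [tag, attrs = {}, ...children] = node;
  const attrStr = Object.entries(attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  return `<${tag}${attrStr}>${children.map(render).join("")}</${tag}>`;
}

// The same tree literal serves as both the "SDUI" payload and the markup.
const page: Tree = ["section", { class: "profile" },
  ["h1", {}, "Ada Lovelace"],
  ["ul", {}, ["li", {}, "Analyst"], ["li", {}, "Metaphysician"]],
];

console.log(render(page));
// <section class="profile"><h1>Ada Lovelace</h1><ul>…</ul></section>
```

The point of the sketch is only that one tree literal can serve as both the data payload and the markup; a real S-expression pipeline would do this with macros rather than a runtime render function.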
N+1 is a solved problem at the framework level
If GraphQL actually affects your performance, congratulations: your application is EXTREMELY popular, more so than Facebook, and they use GraphQL. There are also persisted queries, etc.
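For reference, Apollo-style automatic persisted queries amount to sending a hash instead of the query text and falling back to the full query when the server doesn’t recognize the hash. A rough sketch of the request shape (the endpoint and query are placeholders, and the error-detection detail varies by server):

```typescript
// Rough sketch of an automatic-persisted-query round trip (Apollo-style APQ).
// URL and query are placeholders; error handling is simplified.
import { createHash } from "node:crypto";

const query = `query Viewer { viewer { id name } }`;
const sha256Hash = createHash("sha256").update(query).digest("hex");

async function persistedFetch(variables: Record<string, unknown>) {
  // First attempt: send only the hash, not the query text.
  const res = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      variables,
      extensions: { persistedQuery: { version: 1, sha256Hash } },
    }),
  });
  const payload = await res.json();

  // If the server has not seen this hash yet, retry once with the full query.
  const notFound = payload.errors?.some(
    (e: { extensions?: { code?: string } }) =>
      e.extensions?.code === "PERSISTED_QUERY_NOT_FOUND",
  );
  if (!notFound) return payload;

  const retry = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      query,
      variables,
      extensions: { persistedQuery: { version: 1, sha256Hash } },
    }),
  });
  return retry.json();
}
```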
Not sure about caching; if anything, GraphQL offers a more granular level of caching, so results can be reused even more?
The only issue I see with GraphQL is that the tooling makes it much harder to get started on a new project, but recent projects such as gql.tada make it much easier, though it could still be easier.
We use the dataloader pattern (albeit with an in-house Golang implementation) and it has solved all our N+1 problems.
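Their Go implementation isn’t shown here, but the pattern itself is small; a minimal sketch using the reference JavaScript dataloader package (schema, table, and db helper are illustrative, not theirs):

```typescript
// Minimal sketch of the dataloader pattern using the reference JS "dataloader"
// package (not the in-house Go implementation mentioned above).
import DataLoader from "dataloader";

type User = { id: string; name: string };

// Hypothetical database helper.
declare const db: { query(sql: string, params: unknown[]): Promise<User[]> };

// Collects every .load(id) issued in the same tick and resolves them with one
// batched query instead of one query per resolver call.
const userLoader = new DataLoader<string, User | undefined>(async (ids) => {
  const rows = await db.query(
    "SELECT id, name FROM users WHERE id = ANY($1)",
    [ids],
  );
  const byId = new Map(rows.map((u) => [u.id, u] as const));
  // dataloader requires results in the same order as the requested keys.
  return ids.map((id) => byId.get(id));
});

// Comment.author resolver: resolving N comments triggers a single users query.
const resolvers = {
  Comment: {
    author: (comment: { authorId: string }) => userLoader.load(comment.authorId),
  },
};
```

In a real server you would typically construct the loader per request so its cache doesn’t leak across users.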
E2E type safety in our case is handled by TypeScript code generation. It works very well. I also happen to have to work in a NextJS codebase, which is the worst piece of technology I have ever had the displeasure of working with, and I don't really see any meaningful difference on a day-to-day basis between the type sharing in the NextJS codebase (where server/client is a very fuzzy boundary) and the other codebase, which just uses code generation and is a client-only SPA.
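The comment doesn’t say which generator is used; whatever the tool, the end result is roughly a query document paired with generated TypeScript types, along the lines of this hand-written illustration:

```typescript
// Illustrative only: the kind of pairing a GraphQL codegen tool produces --
// a query document plus a TypeScript type describing its result shape.
const GetThreadDocument = /* GraphQL */ `
  query GetThread($id: ID!) {
    thread(id: $id) {
      title
      comments(page: 1, limit: 20) { body postedAt }
    }
  }
`;

// "Generated" (here: handwritten for illustration) variable and result types.
type GetThreadVariables = { id: string };
type GetThreadResult = {
  thread: {
    title: string;
    comments: { body: string; postedAt: string }[];
  } | null;
};

// A thin typed fetch wrapper: callers get end-to-end types with no shared
// server/client runtime, only generated definitions.
async function execute<TResult, TVariables>(
  query: string,
  variables: TVariables,
): Promise<TResult> {
  const res = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query, variables }),
  });
  const { data } = await res.json();
  return data as TResult;
}

const data = await execute<GetThreadResult, GetThreadVariables>(
  GetThreadDocument,
  { id: "abc" },
);
data.thread?.comments[0].body; // fully typed
```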
For stitching we use Nautilus and I've never observed any issues with it. We had one outage because of a description that was updated in some dependency, and that sucked, but for the most part it just works. Our usage is probably relatively simple, though.
On top of HTTP-level caching, you can do any type of caching (Redis, filesystem, etc.) just like with regular REST, but at a more granular level. For example, if user { comments(threadId: "abc", page: 1, limit: 20) { body, postedAt } } is requested and then cached, and another request later comes in for thread(id: "abc") { comments(page: 1, limit: 20) { body, postedAt } }, the two can share the cache.
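A minimal sketch of that kind of field-level cache, where the key is derived from the field arguments so both query shapes above land on the same entry (a Map stands in for Redis or the filesystem; all names are made up):

```typescript
// Illustrative field-level cache: the key depends only on which comments are
// being fetched (thread id, page, limit), not on which query shape asked.
type Comment = { body: string; postedAt: string };

// Hypothetical data-access helper.
declare function fetchCommentsFromDb(
  threadId: string,
  page: number,
  limit: number,
): Promise<Comment[]>;

const commentCache = new Map<string, Comment[]>();

async function loadComments(threadId: string, page: number, limit: number) {
  const key = `comments:${threadId}:${page}:${limit}`;
  const hit = commentCache.get(key);
  if (hit) return hit; // both query shapes land on this entry

  const rows = await fetchCommentsFromDb(threadId, page, limit);
  commentCache.set(key, rows);
  return rows;
}

// Both resolvers delegate to the same cached loader, so the user{...} and
// thread{...} queries share cached pages of comments.
const resolvers = {
  User: {
    comments: (
      _user: unknown,
      args: { threadId: string; page: number; limit: number },
    ) => loadComments(args.threadId, args.page, args.limit),
  },
  Thread: {
    comments: (thread: { id: string }, args: { page: number; limit: number }) =>
      loadComments(thread.id, args.page, args.limit),
  },
};
```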
But of course, there is always the classic dataloader as well.
I am not saying that if you use GraphQL all the problems will be solved, but I am saying that the problem the OP describes has already been solved in an arguably "better" way, since it does not tie the presentation (HTML) to the data, which matters for multiplatform apps that span web and native.
That's the thing: this brings the benefits of GraphQL without requiring GraphQL (+Relay). This was one of the main drivers of RSC (afaik).
Obviously, if you have a GraphQL backend you couldn't care less, and the only benefit you'd get is reducing bundle size, e.g. for content-heavy static pages. But you'll lose client-side caching, so you can't have your cake and eat it too.
Compared to GraphQL, Server Components are a big step back: you have to do manually on the server what GraphQL gave you by default.
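As a rough illustration of the "do it manually on the server" point, here is a hypothetical Next.js-style async Server Component (the data helpers are made up): the per-screen data assembly, field selection, and any batching or caching are hand-written rather than expressed as a query:

```tsx
// Illustrative React Server Component (Next.js App Router style). The data
// helpers are hypothetical; the point is that assembling the screen's data --
// which fields, which batching, which cache -- is written by hand per screen.
type Thread = { title: string };
type Comment = { id: string; body: string; postedAt: string };

// Hypothetical data-access helpers.
declare function getThread(id: string): Promise<Thread>;
declare function getComments(
  threadId: string,
  opts: { page: number; limit: number },
): Promise<Comment[]>;

export default async function ThreadPage({ params }: { params: { id: string } }) {
  // Roughly the equivalent of:
  //   query { thread(id: $id) { title comments(page: 1, limit: 20) { body postedAt } } }
  const thread = await getThread(params.id);
  const comments = await getComments(params.id, { page: 1, limit: 20 });

  return (
    <article>
      <h1>{thread.title}</h1>
      <ul>
        {comments.map((c) => (
          <li key={c.id}>
            {c.body} <time>{c.postedAt}</time>
          </li>
        ))}
      </ul>
    </article>
  );
}
```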