You don't even need to look at any specific client or server implementation to see why caching is a problem.
Fundamentally, it cannot work well because the whole premise of GraphQL is to allow clients to request as many different permutations of the data as they want.
It doesn't make sense to cache every unique permutation of the same underlying data, because the number of distinct cacheable responses grows combinatorially with the number of selectable fields, even over a very limited dataset.
From a security point of view, a client can DoS a server more easily because they can just request a slightly different permutation of the dataset every time (same data, different query); each request forces the GraphQL server to do significant work and allocate memory.
It's less work for the client to generate complex random GraphQL queries than it is for the server to generate valid responses.
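A rough sketch of the argument above (the cache and helper names here are hypothetical, not from any real GraphQL server): if the server keys its response cache on the raw query text, two queries for the same underlying record still miss each other's entries, and the number of possible selection-set shapes explodes combinatorially.

```python
import hashlib
from itertools import combinations

# Hypothetical naive response cache keyed on the raw query string.
cache = {}

def cache_key(query: str) -> str:
    return hashlib.sha256(query.encode()).hexdigest()

# Same underlying data, different query shape -> different cache entries.
q1 = "{ user(id: 1) { name email } }"
q2 = "{ user(id: 1) { email name } }"
assert cache_key(q1) != cache_key(q2)

# Even over four selectable fields there are 2**4 - 1 = 15 distinct
# non-empty selection sets, before considering field order or nesting.
fields = ["name", "email", "avatar", "createdAt"]
shapes = sum(1 for r in range(1, len(fields) + 1)
             for _ in combinations(fields, r))
assert shapes == 15
```

Normalizing queries (sorting fields, stripping whitespace) collapses some of this, but the selection-set count itself still grows exponentially in the number of fields.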
>Fundamentally, it cannot work well because the whole premise of GraphQL is to allow clients to request as many different permutations of the data as they want.
What kind of client are we talking about here? If you run a web app, then all your queries will be predefined in the code, which means that the only variables are just that, variables. If you are talking about other apps consuming your GraphQL API, then yes, they can request whatever permutation they want. But realistically, what client that consumes your API would keep changing the shape of their request all the time? And if they do, then just revoke their access. Abusive clients are not a problem exclusive to GraphQL.
Requests are not equally probable; they'll follow some sort of power-law distribution. Caching will work well enough in that situation: common queries will remain hot and rare queries will fall out of the cache. The net effect is faster.
SQL databases often have query caches for this reason.
I agree about DoS, but I figure that's going to require the usual defences around throttling. Ideally in the database, but most databases of any kind don't have workload partitioning.
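The hit-rate claim above can be sketched numerically (this is a hypothetical simulation, not any real server's cache): under a Zipf-like workload, an LRU cache far smaller than the query space still serves most requests, because the hot queries keep re-touching their entries.

```python
import random
from collections import OrderedDict

# Minimal LRU cache: least-recently-used entries are evicted first.
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)  # mark as recently used
            return self.data[key]
        return None

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict coldest entry

random.seed(0)
cache = LRUCache(capacity=50)  # tiny relative to the query space
hits = misses = 0
for _ in range(10_000):
    # Pareto-distributed query ids: low ids (common queries) dominate.
    query_id = min(int(random.paretovariate(1.2)), 10_000)
    if cache.get(query_id) is not None:
        hits += 1
    else:
        misses += 1
        cache.put(query_id, f"response-{query_id}")

hit_rate = hits / (hits + misses)
assert hit_rate > 0.4  # most traffic lands on cached hot queries
```

Uniformly random queries (the adversarial case from the first comment) would tank this hit rate, which is why the DoS concern and the caching argument don't actually contradict each other.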
I'm certainly no expert in this, but I have been working with it a lot recently while developing an app using GraphQL.
The solution in Apollo is to do caching in the client (and the server). This means that if you add something to a list of items on the client side, you have to manually update the client cache or fetch it all fresh again. It is a pain to get used to this at first, but it turns out to not be that bad, and I can see how it gives a lot of flexibility. As far as I can tell, there is no comparable built-in caching for REST-based apps (i.e. using localStorage); you're always re-inventing it.
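The manual-update pattern described above can be sketched like this (a hypothetical model of a normalized cache, with made-up keys; Apollo's real API differs): the cache stores individual entities plus query results that reference them, so after a mutation adds an entity, the cached list query is stale until the client patches it or refetches.

```python
# Hypothetical normalized client cache: entities keyed by type and id,
# plus the cached result of a list query referencing them.
cache = {
    "Item:1": {"id": 1, "title": "first"},
    "ROOT_QUERY:allItems": ["Item:1"],  # cached list-query result
}

def add_item_mutation(item: dict) -> None:
    key = f"Item:{item['id']}"
    cache[key] = item  # the server confirmed the write...
    # ...but the cached list does not update itself; patch it by hand
    # (the equivalent of updating the client cache after a mutation):
    cache["ROOT_QUERY:allItems"].append(key)

add_item_mutation({"id": 2, "title": "second"})
assert cache["ROOT_QUERY:allItems"] == ["Item:1", "Item:2"]
```

Skipping the manual patch leaves the list query serving stale data from the cache, which is exactly the refetch-or-update choice the comment describes.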
The UX changes a bit in SPAs. The user clicks on a tab and sees a loading section. When the user clicks away from the tab and then back, it is right there without having to reload; the data is pulled out of localStorage automatically by Apollo Client. Makes things feel really snappy. If the user wants to see a new version of the data, they click the refresh button.