
Couldn't disagree more. GraphQL encourages tight coupling--the Frontend is allowed to send any possible query to the Backend, and the Backend needs to accommodate all possible permutations indefinitely and with good performance. This leads to far more finger-pointing/inefficiency in the long run, despite whatever illusion of short-term expediency it creates.

It is far better for the Backend to provide the Frontend a contract--you can do it with OpenAPI/Swagger--here are the endpoints, here are the allowed parameters, here is the response you will get--and we will make sure this narrowly defined scope works 100% of the time!
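For illustration, such a contract can be as small as one typed endpoint. This is only a sketch; the endpoint path, parameter names, and response shape below are made up, not from any real API:

    // Hypothetical contract: one endpoint, fixed parameters, fixed response
    // shape. Names and shapes are illustrative only.
    interface WidgetSummary {
      id: string;
      name: string;
      priceCents: number;
    }

    interface ListWidgetsResponse {
      items: WidgetSummary[];
      nextCursor: string | null;
    }

    // Contract: GET /v1/widgets?limit=<1..100>&cursor=<opaque>
    async function listWidgets(limit: number, cursor?: string): Promise<ListWidgetsResponse> {
      const params = new URLSearchParams({ limit: String(limit) });
      if (cursor) params.set("cursor", cursor);
      const res = await fetch(`/v1/widgets?${params}`);
      if (!res.ok) throw new Error(`listWidgets failed: ${res.status}`);
      return res.json();
    }

The Frontend codes against exactly that, and the Backend only has to make exactly that fast and correct.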




> It is far better for the Backend to provide the Frontend a contract

It sure is better for the backend team, but the client teams will need to sit through countless meetings begging to establish or change a contract, always being told it will come in the next sprint (or the one after, or in Q3).

> This leads to far more finger-pointing/inefficiency in the long run, despite whatever illusion of short-term expediency it creates.

It is true it can cause these kinds of problems, but they take far, far, far less time than mundane contract-agreement conversations. Catastrophic failures are pretty dire when they do happen, but there are a lot of ways to mitigate them, like good monitoring and staggered deployments.

It is a tradeoff to be sure; there is no silver bullet.


Trying to solve org problems with tech just creates more problems, all the while not actually solving the original problem.


This is what I wanted to say too. If your backend team is incapable of rapidly adding new endpoints for you, they are probably going to create a crappy GraphQL experience and not solve those problems either. So many frontend engineers on here say that GraphQL solves the problem they had with backend engineers being unresponsive or slow, but that is an org problem, not a technology problem.


At TableCheck, our frontend engineers started raising backend PRs for simple API stuff. If you use a framework like Rails, once you have the initial API structure sketched out, 80% of enhancements can be done by backend novices.


Yup. And the solution to that org problem is for the front-end engineers to slow down and help out the "backend" engineers. The complexity issues faced by the back-end are only getting worse with time; the proper solution is not to add more complexity to the situation, but to pay down the technical debt in your organization.

If your front-end engineers end up twiddling their thumbs (no bugs/hotfixes), perhaps there is time (and manpower) to try to design and build a "new" system that can cater to the new(er) needs.


GraphQL is the quintessential move-fast-and-break-things technology. I have worked in orgs, and know people who have worked in other orgs, where getting time from other teams is really painful. It is usually caused by extreme pressure to deliver things.

What ends up happening is the clients doing workarounds for backend problems, which creates even more technical debt.


I never understood why this was such a big deal... "Hey, we need an endpoint to fetch a list of widgets so we can display them." "Okay." Is that so difficult? Maybe the real problem lies in poor planning and poor communication.


Not to mention the fact that GraphQL allows anyone messing with your API to also execute any query they want. Then you start getting into query whitelisting, which adds a lot of complexity.


Most REST APIs I've seen in the wild just send the entire object back by default because they have no idea which fields the client needs. Most decent REST API implementations do support a field selection syntax in the query string, but it's rare that they'll generate properly typed clients for it. And of course OpenAPI has no concept of field selection, so it won't help you there either.
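For what it's worth, that field selection usually looks something like the sketch below. The "fields" parameter name and the parsing convention are assumptions on my part; there is no standard for it:

    // Sketch of a "?fields=" filter on a REST response.
    type Widget = { id: string; name: string; priceCents: number; description: string };

    // GET /widgets/42?fields=id,name  ->  { "id": "42", "name": "..." }
    function selectFields(widget: Widget, fieldsParam?: string): Partial<Widget> {
      if (!fieldsParam) return widget;                 // default: the whole object back
      const requested = new Set(fieldsParam.split(","));
      const out: Partial<Widget> = {};
      for (const [key, value] of Object.entries(widget)) {
        if (requested.has(key)) (out as Record<string, unknown>)[key] = value;
      }
      return out;
    }

And as noted, the generated client typings rarely know anything about it.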

With my single WordPress project I found that WP GraphQL ran circles around the built-in WP REST API because it didn't try to pull in the dozens of extra custom fields that I didn't need. Not like it's hard to outdo anything built-in to WP tho...


> the Frontend is allowed to send any possible query to the Backend

It's really not, it's not exposing your whole DB or allowing random SQL queries.

> It is far better for the Backend to provide the Frontend a contract

GraphQL does this - it's just called the GraphQL "schema". It's not your entire database schema.
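As a sketch (the types below are made up), the schema is the contract--only what it declares can be asked for:

    // The contract is the schema: only what is declared here can be queried.
    // These types are illustrative, not your database schema.
    const typeDefs = /* GraphQL */ `
      type Widget {
        id: ID!
        name: String!
        priceCents: Int!
      }

      type Query {
        widget(id: ID!): Widget
        widgets(limit: Int = 20): [Widget!]!
      }
    `;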


A GraphQL schema is a contract though.

And the REST API can still get hammered by the client - they could do an N + 1 query on their side. With GraphQL at least you can optimize this without adding a new endpoint.
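For example, the classic fix is batching inside the resolver instead of shipping a new endpoint--something like DataLoader. This is a sketch; fetchWidgetsByIds is a made-up function standing in for whatever does the batched database read:

    import DataLoader from "dataloader";

    type Widget = { id: string; name: string };

    // Hypothetical batch fetch: one round trip for all requested ids.
    declare function fetchWidgetsByIds(ids: readonly string[]): Promise<Widget[]>;

    // Collapses N separate widget lookups within a request into one batch.
    const widgetLoader = new DataLoader<string, Widget | undefined>(async (ids) => {
      const rows = await fetchWidgetsByIds(ids);
      const byId = new Map<string, Widget>(rows.map((w) => [w.id, w]));
      return ids.map((id) => byId.get(id));   // results must match key order
    });

    // In a resolver: return widgetLoader.load(parent.widgetId);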


Yes, GraphQL is a "contract" in the sense that a blank check is also a "contract".


You can whitelist queries in most systems though. In development mode, allow them to run whatever query, then lock it down to the whitelist for production, if that type of control is necessary.
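A rough sketch of what that gate can look like. The hashing scheme and env check are assumptions; most GraphQL server/client libraries ship a "persisted queries" feature that does this for you:

    import { createHash } from "crypto";

    // sha256 hex of each query the frontend shipped with, collected at build time.
    const allowedQueryHashes = new Set<string>([
      // "a1b2c3..."
    ]);

    function isQueryAllowed(queryText: string): boolean {
      if (process.env.NODE_ENV !== "production") return true;   // dev: run anything
      const hash = createHash("sha256").update(queryText).digest("hex");
      return allowedQueryHashes.has(hash);                       // prod: allowlist only
    }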


Can you explain what you mean by this? The GraphQL API you expose allows only a certain schema. Sure, callers can craft a request that is slow because it's asking for too much, but

- Each individual thing available in the request should be no less timely to handle than it would be via any other API

- Combining too many things together in a single call isn't a failing of the GraphQL endpoint, it's a failing of the caller; the same way it would be if they made multiple REST calls

Do you have an example of a call to a GraphQL API that would be a problem, that wouldn't be using some other approach?


Then we just come back full round trip to REST, where the backend clearly defines what is allowed and what is returned. So with GraphQL it is unnecessarily complicated to safeguard against a caller querying for all of the data and then some. For example, the caller queries nested structures ad infinitum, possibly even triggering a recursive loop that wakes somebody up at 3am.


But GraphQL doesn't allow for infinitely nested queries; the query itself has to include as much depth as it wants in the response.

> Then we just come back full round trip to REST

Except that GraphQL allows the back end to define the full set of fields that are available, and the front end can ask for some subset of that. This allows for less load, both on the network and on what the back end needs to fetch data for.

From a technical perspective, GraphQL is (effectively) just a REST API that allows the front end to specify which data it wants back.
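Concretely, two screens hitting the same endpoint can each ask for just what they render (field names here are illustrative):

    // Two screens, one endpoint: each asks only for the fields it renders.
    const listQuery = /* GraphQL */ `
      query WidgetList {
        widgets(limit: 20) { id name }
      }
    `;

    const detailQuery = /* GraphQL */ `
      query WidgetDetail($id: ID!) {
        widget(id: $id) { id name priceCents description }
      }
    `;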


You are correct and I agree with you. GraphQL can be used effectively like this, and I've seen one example where it is. A new endpoint can be defined very quickly, and it is essentially a REST API with the possibility of the client specifying what data it wants back (as you described).

The other extreme is to expose the entire data model by default (PostGraphile) and then get lost in the customisation and authorisation.


The problem is that the client team can - without notice - change their query patterns in a way that creates excess load when deployed.

When you use the "REST" / JSON-over-HTTP pattern which was more common in 2010, changes in query patterns necessarily involve the backend team, which means they are aware of the change & have an opportunity to get ahead of any performance impact.


My blocker on ever using GraphQL is that, generally, if you've got enough data to need GraphQL, you're hitting a database of some kind... and I do not generally hand direct query access to any client, not even other projects within the same organization, because I've spent far too much time in my life debugging slow queries. If even the author of a system can be surprised by missing indices and other things that cause slow queries, both due to initial design and due to changes in how the database decides to do things as things scale up, how can I offload to the client the responsibility of knowing which queries will and will not complete in a reasonable period of time? They get the power to run anything they want, I get the responsibility of making sure it all performs, and nobody on either side has a contract for what is what?

I've never gotten a good answer to that question, so I've never even considered GraphQL in such systems where it may have made sense.

I can see it in something big like Jira or GitHub to talk to itself, so the backend & frontend teams can use it to decouple a bit, and then if something goes wrong with the performance they can pick up the pieces together as still effectively one team. But if that crosses a team boundary the communication costs go much higher and I'd rather just go through the usual "let's add this to the API" discussions with a discrete ask rather than "the query we decided to run today is slow, but we may run anything else any time we feel like it and that has to be fast too".


It seems there is a recent trend of using adapters that expose data stores over graphql automatically, which is kind of scary.

The graphql usage I'm used to works more or less the same as REST. You control the schema and the implementation, you control exactly how much data store access is allowed, etc. It's just like REST except the schema syntax is different.

The main advantage of GraphQL IMO is the nice introspection tools that frontend devs can use, i.e. GraphiQL and run queries from that UI. It's like going shopping in a nice supermarket.


Not particularly scary. For something like Hasura, resolvers are opt-in, not opt-out. So that should alleviate some of your concerns off the bat.

Postgraphile leans more heavily on the database, which I prefer. Set up some row-level access policies along with table-level grant/revoke, and security tends to bubble up. There's no getting past a UI or middleware bug to get the data when the database itself is denying access to records. It's pretty simple to unit test, and much more resistant to data leakage when the one-off automation script doesn't know the rules.
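A minimal sketch of that kind of policy, run here as a migration from Node. The table, role, and setting names are made up; the point is only that the database itself refuses rows the current user shouldn't see:

    import { Client } from "pg";

    const rlsSetup = `
      ALTER TABLE reservations ENABLE ROW LEVEL SECURITY;

      CREATE POLICY reservations_by_owner ON reservations
        FOR SELECT
        USING (owner_id = current_setting('app.current_user_id')::uuid);

      REVOKE ALL ON reservations FROM PUBLIC;
      GRANT SELECT ON reservations TO app_user;
    `;

    async function applyRowLevelSecurity(connectionString: string): Promise<void> {
      const client = new Client({ connectionString });
      await client.connect();
      try {
        await client.query(rlsSetup);   // multiple statements, no parameters
      } finally {
        await client.end();
      }
    }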

I also love that the table and column comments bubble up automatically as GraphiQL resolver documentation.

Agreed about the introspection tools. I can send a GraphiQL URL to most junior devs with little to no SQL experience, and they'll get data to their UI with less drama than with Swagger interfaces IMO. (Though Swagger tends to be pretty easy too compared to the bad old days.)


But as people noted, it's not the "can this get the data" unit testing that's a problem here. It's the performance issues.

> I can send a GraphiQL URL to most junior devs with little to no SQL experience, and they'll get data to their UI with less drama

But that's like giving direct (read) database access to someone that was taught the syntax of SQL but not the performance implications of the different types of queries. Sure, they can get the data they want; but the production server may fall over when someone hits the front end in a new way. Which is, I think, what a lot of people are talking about when they talk about GraphQL having load issues based on the front end changing their call.


Hasura and Postgraphile are quite performant. No complaints there.

And you can put queries on an allow list, control max query depth, and/or throttle on query cost.
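For instance, a crude depth check can be run against the parsed query before executing it. This is only a sketch using graphql-js; it ignores fragments, the limit of 8 is arbitrary, and in practice you'd reach for a ready-made validation rule such as the graphql-depth-limit package:

    import { parse, Kind, type DocumentNode, type SelectionSetNode } from "graphql";

    // Depth of the deepest field selection in a selection set.
    function selectionDepth(set: SelectionSetNode | undefined): number {
      if (!set) return 0;
      let max = 0;
      for (const sel of set.selections) {
        if (sel.kind === Kind.FIELD) {
          max = Math.max(max, 1 + selectionDepth(sel.selectionSet));
        }
      }
      return max;
    }

    function exceedsMaxDepth(queryText: string, maxDepth = 8): boolean {
      const doc: DocumentNode = parse(queryText);
      return doc.definitions.some(
        (def) => def.kind === Kind.OPERATION_DEFINITION && selectionDepth(def.selectionSet) > maxDepth
      );
    }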


> put queries on an allow list, control max query depth, and/or throttle on query cost.

All of which are features which give you some way to respond to the performance issues you avoid by planning your API up-front.


None of those are automatic either. In any API you write, you still have to remember to put in LIMIT clauses and to avoid OFFSET. And then you have to actually write the API rather than have it generated for you.

There are no free lunches, especially with regard to security and sanity checks.


> It seems there is a recent trend of using adapters that expose data stores over graphql automatically, which is kind of scary.

I think that's the part where I have a disconnect. To me, both REST and GraphQL likely need to hit the database to get their data, and I would be writing the code that does that. Having the front end's call directly translated to database queries seems insane. The same would be true if you wrote a REST API that hit the database directly and took table/field names from query parameters; only... we don't do that because it would be stupid.


> changes in query patterns necessarily involve the backend team,

How does this follow? A client team can decide to e.g. put up a cross-sell shelf on a low-traffic page by calling a REST endpoint with tons of details, and you have the same problem. I don't see the difference in any of these discussions; the only thing different is the schema syntax (GraphQL vs. OpenAPI).


GQL is far more flexible wrt what kind of queries it can do (and yes, it can be constrained, but this flexibility is the whole point), which means that accidentally turning a cheap query into an expensive one is very easy.

A hand-coded REST endpoint will give you a bunch of predefined options for filtering, sorting etc, and the dev who implements it will generally assume that all of those can be used, and write the backing query (and create indices) accordingly.
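Something like the sketch below, where the endpoint only accepts the filters and sorts it was actually designed (and indexed) for; the names and values are illustrative:

    // Anything outside the designed-for options is rejected up front.
    const ALLOWED_SORTS = new Set(["created_at", "price_cents"]);
    const ALLOWED_STATUSES = new Set(["active", "archived"]);

    function parseWidgetListParams(query: Record<string, string | undefined>) {
      const sort = query.sort ?? "created_at";
      const status = query.status ?? "active";
      if (!ALLOWED_SORTS.has(sort)) throw new Error(`unsupported sort: ${sort}`);
      if (!ALLOWED_STATUSES.has(status)) throw new Error(`unsupported status: ${status}`);
      const limit = Math.min(Number(query.limit ?? "20"), 100);
      return { sort, status, limit };
    }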


> and yes, it can be constrained, but this flexibility is the whole point

Flexibility within the constraints of what the back end can safely support.

To me, the "whole point" of GraphQL is to be able to have the client ask for only the data it needs, rather than have a REST API that returns _everything_ and let the front end throw away what it doesn't need (which incurs more load).

If you can't support returning certain configurations of data, then... don't.


And the client team using a REST API can do the exact same thing, by making more calls. There's no real difference between "more calls" and "same amount of calls, but requests more data".

That being said, it's a lot easier to set up caching for REST calls.



