
Kudos to the author for reevaluating his opinion and having a change of heart about a technology he admits to having championed before.

IMO GraphQL is a technological dead end in much the same way as Mongo is.

They were both conceived to solve a perceived problem with tools widely adopted at the time, but ended up as something even worse, while the tools they were trying to replace rapidly matured and improved.

Today OpenAPI REST and Postgres are rightfully considered the defaults, and you even have PostgREST combining them, while most of those who adopted Mongo or GraphQL have either long since migrated away or are stuck discussing migrations.
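(If you haven't seen PostgREST: it's roughly this. A sketch with a made-up host and table, but the filter syntax is PostgREST's own.)

```typescript
// PostgREST exposes each table/view as a REST resource and compiles the
// query string to SQL; "api.example.com" and "people" are placeholders.
const res = await fetch(
  "https://api.example.com/people?age=gte.18&order=name.asc&limit=5",
  { headers: { Accept: "application/json" } }
);
// Roughly: SELECT * FROM people WHERE age >= 18 ORDER BY name ASC LIMIT 5
const people = await res.json();
```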



GraphQL by itself has a lot of issues, but Hasura is IMO a power tool. It gives you CRUD with a security model and a lot of bells and whistles out of the box, and paired with Apollo client on the front end it's pretty quick to set up and use. I still use random REST endpoints, and I'm not interested in federation, but as a quick way to get an app going it's great.
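As a rough sketch of the front-end half (the Hasura endpoint and the todos table here are made up, but the generated query shape is what Hasura gives you per tracked table):

```typescript
import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

// Hypothetical Hasura endpoint; Hasura auto-generates query/mutation
// roots (and enforces the permission rules) for every tracked table.
const client = new ApolloClient({
  uri: "https://my-app.hasura.app/v1/graphql",
  cache: new InMemoryCache(),
});

const { data } = await client.query({
  query: gql`
    query RecentTodos {
      todos(order_by: { created_at: desc }, limit: 10) {
        id
        title
        done
      }
    }
  `,
});
```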


Same, with a shout-out to PostGraphile as well. As an aside, I'm sorry, but I roll my eyes every time I encounter data loaders and the N+1 problem they're meant to address, which is really a consequence of insisting on the resolver execution model. GraphQL is a query language. Just compile it to SQL (or Cypher, or SPARQL, or whatever)... when that's possible.
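A minimal sketch of what "just compile it" means, with a made-up users/posts schema: one SQL round trip instead of one query for users plus one per user for posts.

```typescript
// The incoming GraphQL document...
const gqlQuery = /* GraphQL */ `
  {
    users {
      name
      posts { title }
    }
  }
`;

// ...compiled to a single Postgres statement that builds the response
// shape directly as JSON, so no per-row resolver round trips:
const compiledSql = `
  SELECT json_agg(json_build_object(
    'name', u.name,
    'posts', (
      SELECT coalesce(json_agg(json_build_object('title', p.title)), '[]'::json)
      FROM posts p
      WHERE p.user_id = u.id
    )
  )) AS data
  FROM users u
`;
```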


People need to stop judging the viability of something based on how satisfying it feels to use it in a toy project. When time is money, you'll see what really works.

At least GraphQL supposedly works for Facebook, and I tried it out before deciding it shouldn't be a default. I never even bothered with MongoDB. I've had to repeatedly veto using both in projects, because someone thought it'd be cool and thought that was a good enough reason. Finally it's not cool anymore, but there will be another thing.


GraphQL works when you have an army of engineers who are able to solve all the perf issues.


Everything works when you have an army of top-flight engineers.


I can name plenty of engineering tasks an army of top engineers has failed at, to the point of negatively impacting the product.


Ok, "most things". ;)


Maybe. Can that one army fix GraphQL for all the teams in the company? Because I've seen things work that way, but I've also seen tools that are pitched as "maintained in one place for everyone" but are actually a complexity burden on every single team, especially if/when their usage changes.


No idea why you bundle Mongo in there. I use Mongo in multiple production apps and I've never, ever looked back. I wouldn't even consider RDBMSes at all after my experience with Mongo unless I absolutely had to.


What kind of "production apps"? A todo list SaaS?

I swear, the older I get, the more convinced I am that people who don't use an RDBMS just don't work on complex systems. Period.


Weird. My experience has been that it's the biggest and most complex applications where an RDBMS breaks down and you start looking for more scalable options.


Spanner and its competitors show you can do both.


Curious, I hadn't heard that take on Mongo. Do you have a link to some more info on this?



It's a shame that this out-of-date meme stuff continues to give MongoDB a bad rap. It's a great DB if you need to be flexible, move fast, and avoid migration headaches (speaking firsthand, those have dragged out dev cycles quite a bit). Most startups/SaaS/web apps would benefit greatly from using MongoDB purely from a reduction-of-complexity standpoint.

The current version of MongoDB, imo, makes you super productive and scales without a ton of thinking. If you're working in Node.js, it's even more useful as the query language works just like a JS/JSON object so writing queries is super fast (compared to SQL where you have to spend a lot of mental cycles figuring out how to map object/array data).
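For example, with the official Node driver (connection string and names are placeholders), the query is literally the shape of the stored documents:

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
await client.connect();
const orders = client.db("shop").collection("orders");

// The filter is just a JS object mirroring the documents themselves:
const recent = await orders
  .find({ status: "paid", total: { $gte: 100 } })
  .sort({ createdAt: -1 })
  .limit(20)
  .toArray();
```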

I've found that denormalizing data (not even necessarily copying/duping data, but trying to centralize storage of it) when using MongoDB is the way to get the most value out of it. If you try to treat it like an RDB (which does work but can cause issues with complex queries), you'll run into headaches. If you just design stuff to be nested, though (and use the built-in APIs to query that nested data), it works incredibly well.
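A sketch of that nested style, same placeholder orders collection as above:

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
await client.connect();
const orders = client.db("shop").collection("orders");

// $elemMatch applies all conditions to a single embedded array element:
const bigWidgetOrders = await orders
  .find({ items: { $elemMatch: { sku: "WIDGET-1", qty: { $gte: 5 } } } })
  .toArray();

// The positional operator ($) updates the matched element in place,
// no join table or schema migration involved:
await orders.updateOne(
  { "items.sku": "WIDGET-1" },
  { $inc: { "items.$.qty": 1 } }
);
```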


That is pretty funny, but that video is 11 years old. It can't still be like that, can it? Seems like people have been down on Mongo in the last year, and I'm trying to catch up.


WiredTiger was kinda Mongo's InnoDB and has made "your data will actually still be there later" rather more true than it used to be.

I think the key thing is that people using MySQL were having trouble with deep data and found MongoDB's document oriented approach much easier, but these days people are tending to start with PostgreSQL, which can handle that nicely.
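A sketch of what "handle that nicely" looks like, assuming a made-up events table with a jsonb payload column: document-style storage without giving up SQL, and GIN indexes keep the jsonb operators fast.

```typescript
import { Client } from "pg";

const db = new Client({ connectionString: "postgres://localhost/app" });
await db.connect();

// ->> extracts a field as text, @> tests JSON containment:
const { rows } = await db.query(`
  SELECT payload->>'userId' AS user_id, payload
  FROM events
  WHERE payload @> '{"type": "signup"}'
  ORDER BY created_at DESC
  LIMIT 10
`);
```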

(MySQL/MariaDB are far better than they used to be as well, though I find most stuff I read online doesn't take advantage of that as much as it might)

There's also probably a factor of Mongo solving pain points people had when they switched to it, and there being lots of excitement around that, whereas today the same people have run into the pain points of Mongo often enough that it's no longer nearly so exciting a prospect.

I wouldn't honestly be surprised if we're now at a point where people are -more- negative about Mongo than it really deserves, and I say that as somebody who viscerally hated it on sight and would still rather avoid dealing with it myself if at all possible.

(oh and MongoDB the -company- has always done their best to be a good corporate community citizen, sponsoring all sorts of cool things as a result, and while I think the license change was a shame I -still- think they're doing their best, just in an environment where they wouldn't be around to do their best at all if they hadn't avoided being killed by AWS)


> the license change was a shame ... avoid being killed by AWS

That sounds similar to Elastic's story. Did MongoDB go through that as well?


MongoDB were the first major player to do that (that I know of, at least) back in 2018 - see https://www.mongodb.com/legal/licensing/server-side-public-l... or hit google for a plethora of people being angry about it.


I have not saved any links to back this up.

It is just my personal observation formed from working with Mongo and migrating systems away from it.


Just a couple of nitpicks:

* openapi was basically nonexistent when GQL came out. It certainly wasn't "the tool they were trying to replace"

* Postgres and GQL are not in any way mutually exclusive

* Today, openapi is still tiny compared to GQL. At least as measured by StackOverflow question tags:

https://trends.stackoverflow.co/?tags=graphql,openapi,soap,m...


Stack Overflow trends isn't really a good metric, but setting that aside, you need to sum the time series for Swagger and OpenAPI. There are still plenty of people who call OpenAPI 3.0 "Swagger". And strictly speaking, Swagger is "OpenAPI 2.0".


Good point, thanks!


> IMO GraphQL is a technological dead end in much the same way as Mongo is.

Can you suggest alternatives to graph introspection and related UI tools like GraphiQL, and the subgraph federation systems?


OpenAPI has a handful of open source API explorers. The ones I’m familiar with are Swagger UI, Redoc, and RapiDoc.

OpenAPI 3.0 has this concept of remote references, which can be URLs to other OpenAPI specs hosted anywhere. https://swagger.io/docs/specification/using-ref/
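For illustration, a fragment of what a remote reference looks like (the referenced URL here is made up):

```yaml
# Illustrative fragment only: $ref can point at a schema in a spec
# hosted anywhere.
components:
  schemas:
    Pet:
      $ref: 'https://example.com/specs/pets.yaml#/components/schemas/Pet'
```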


It may not be an exact analog, but Swagger UI for OpenAPI is the closest thing to GraphiQL I've seen. Example: https://petstore.swagger.io/ . Not sure of other alternatives.

No analog for "subgraph federation systems", unless a load balancer will suffice.


What are the use cases for graph introspection in any but a tiny fraction of projects?


Mongo is great if you want a distributed replicated log. Existing tools are sorely lacking. (Postgres and Kafka are broken by design.)
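Concretely, a sketch of using a change stream as a tailable log (placeholder names; requires a replica set):

```typescript
import { MongoClient } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017/?replicaSet=rs0");
await client.connect();
const events = client.db("app").collection("events");

// watch() tails the oplog-backed change stream; each event carries a
// resume token (change._id) you can persist and later pass back as
// { resumeAfter: token } so a restarted consumer picks up where it left off.
const stream = events.watch([], { fullDocument: "updateLookup" });
for await (const change of stream) {
  console.log(change.operationType, change.fullDocument);
}
```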


Curious as to why you think Kafka is broken by design?


1. No reliable way to delete already processed entries.

2. No reliable way to handle queue overflow.

Combine both and you are 100% guaranteed to have an incident. (I guess it keeps devops and sysadmins employed, though.)


I wouldn’t really call these issues “broken by design”…

Rough edges, sure. No reliable way to delete processed messages? Well, who’s to say they were processed? It’s a persistent queue; stuff sticks around by construction. Besides, this can be managed with tombstones and by turning on compaction for that topic.
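A sketch of that with kafkajs (topic and key are made up; the topic needs cleanup.policy=compact, set when it's created):

```typescript
import { Kafka } from "kafkajs";

const kafka = new Kafka({ brokers: ["localhost:9092"] });
const producer = kafka.producer();
await producer.connect();

// On a compacted topic, a null value is a tombstone: compaction will
// eventually drop both it and every earlier record with the same key.
await producer.send({
  topic: "user-profiles",
  messages: [{ key: "user-42", value: null }],
});
```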

How would you want to “handle” queue overflow? You’ve either got storage space or you don’t; this feels a bit like asking “how do I make my bounded queue unbounded”. You don’t; that’s an indicator you’re trying to hold it wrong.

The configs could be a bit easier to drive, but confusing and massive configs are pretty par for the course for Java apps ime.


> Well, who’s to say they were processed?

The queue, which should keep a reference count for messages.
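Something like this, to be concrete; not how Kafka works today, just a minimal sketch of the idea:

```typescript
// The broker tracks a per-message reference count; the last consumer's
// ack deletes the entry, so already-processed messages never pile up.
class RefCountedQueue<T> {
  private entries = new Map<number, { msg: T; pending: number }>();
  private nextId = 0;

  constructor(private consumerCount: number) {}

  publish(msg: T): number {
    const id = this.nextId++;
    this.entries.set(id, { msg, pending: this.consumerCount });
    return id;
  }

  ack(id: number): void {
    const entry = this.entries.get(id);
    if (!entry) return;
    if (--entry.pending === 0) this.entries.delete(id); // last ack frees it
  }
}
```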

> How would you want to “handle” queue overflow?

At the producer end, of course.

> You’ve either got storage space, or you don’t

Kafka assumes you have infinite storage space. That might be okay for toy projects or for the insane architectures you see in the enterprise space, but not for a serious project.



