
Obligatory "depends". I'm aware of more than one company that uses postgres for huge datasets serving workflows with high throughput and importance.

It's truly a workhorse, it's comparatively a pleasure to work with, and its extensibility makes it useful in places some other DBs aren't suitable for.

But a relational DB isn't right for every workload.




> But a relational DB isn't right for every workload.

While sometimes true, I'll counter that it's more common that the application was not truly designed for a relational DB, and instead was designed for reading and storing JSON.


You can store JSON (since 9.2) and JSONB (since 9.4, back in 2014) in PostgreSQL.
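
For example, a minimal sketch (table and column names are made up):

    CREATE TABLE events (
        id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        payload jsonb NOT NULL
    );

    -- GIN index so containment queries (@>) can use it instead of a seq scan
    CREATE INDEX events_payload_idx ON events USING gin (payload);

    SELECT payload->>'user_id'
    FROM events
    WHERE payload @> '{"type": "signup"}';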


I'm very aware of this fact. The issue is that, by definition, JSON (or any other non-scalar value) can't be in normalized form (a JSON document isn't atomic, so it violates 1NF), and so what you end up with is devs using the RDBMS as a K/V store with a few joins thrown in.

JSON performance (or JSONB, it really doesn't matter) is abysmal compared to more traditional column types, especially in Postgres, where large values get TOASTed (compressed and stored out of line), so reading a single key means detoasting the whole document.

Properly normalized tables (ideally out to 5NF, but at least 3NF) are what RDBMSs were designed to deal with, and are how you don't wind up with shit performance and referential integrity issues.
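
As a rough illustration (schema invented for the example): instead of a single orders(payload jsonb) blob, split out the entities and let the database enforce the relationships:

    CREATE TABLE customers (
        id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        email text NOT NULL UNIQUE
    );

    CREATE TABLE orders (
        id          bigint NOT NULL GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        customer_id bigint NOT NULL REFERENCES customers (id),
        placed_at   timestamptz NOT NULL DEFAULT now()
    );

    CREATE TABLE order_lines (
        order_id bigint  NOT NULL REFERENCES orders (id),
        sku      text    NOT NULL,
        qty      integer NOT NULL CHECK (qty > 0),
        PRIMARY KEY (order_id, sku)
    );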


imo, the value in that is interop with relational data. If you're primarily working with JSON, it's probably best to use a document DB.


Yes. I don't mind the occasional JSON[B] column when it makes sense, but it should not be used as an excuse not to design a good schema. In general, if you find yourself trying to fit a non-scalar value into an RDBMS, you should reconsider the data model.
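
A hybrid like this (hypothetical table, just to illustrate) is usually fine: a normalized core plus one jsonb column for genuinely free-form extras:

    CREATE TABLE products (
        id          bigint  GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        name        text    NOT NULL,
        price_cents integer NOT NULL CHECK (price_cents >= 0),
        attributes  jsonb   NOT NULL DEFAULT '{}'  -- vendor-specific extras only
    );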


What's more common is domain dependent.




