Obligatory "depends". I'm aware of more than one company that uses postgres for huge datasets serving workflows with high throughput and importance.
It's truly a workhorse, it's comparatively a pleasure to work with, and its extensibility makes it useful in places some other DBs aren't suitable for.
But a relational DB isn't right for every workload.
> But a relational DB isn't right for every workload.
While sometimes true, I'll counter that it's more common that the application was not truly designed for a relational DB, and instead was designed for reading and storing JSON.
I'm very aware of this fact. The issue is that, by definition, JSON (or any other non-scalar type) cannot be in normalized form, and so what you end up with is devs using an RDBMS as a K/V store with a few joins thrown in.
JSON performance (or JSONB, it really doesn't matter) is abysmal compared to more traditional column types, especially in Postgres, where any large value gets TOASTed (compressed and stored out of line), adding overhead to every read and rewrite of the document.
Properly normalized tables (ideally out to 5NF, but at least 3NF) are what an RDBMS was designed to handle, and are how you avoid shit performance and referential integrity issues.
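To make the referential-integrity point concrete, here's a toy sketch using SQLite via Python's stdlib (a stand-in I'm using only because it needs no server; the same point applies to Postgres, though TOAST specifics obviously don't). The table and column names are made up for illustration. The JSON-blob version happily stores a dangling `customer_id`; the normalized schema rejects it:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this opt-in

# K/V-style approach: one opaque JSON document per row.
conn.execute("CREATE TABLE orders_json (id INTEGER PRIMARY KEY, doc TEXT)")
# A reference to a customer that doesn't exist -- silently accepted,
# because the DB can't see inside the blob.
conn.execute("INSERT INTO orders_json (doc) VALUES (?)",
             (json.dumps({"customer_id": 999, "total": 42.0}),))

# Normalized approach: the relationship lives in the schema.
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    total REAL NOT NULL)""")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Ada')")

# The same dangling reference is now an error, not silent corruption.
try:
    conn.execute("INSERT INTO orders (customer_id, total) VALUES (999, 42.0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)

# A valid row goes through, and joins work as intended.
conn.execute("INSERT INTO orders (customer_id, total) VALUES (1, 42.0)")
```

The JSON table will carry the bad reference forever; the normalized one can't get into that state in the first place.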
Yes. I don't mind the occasional JSON[B] column when it makes sense, but it should not be used as an excuse to not design a good schema. In general, if you find yourself trying to fit a non-scalar value into RDBMS, you should reconsider the data model.