> Postgres can handle tables of that size out of the box

This is definitely true, but I've seen migrations from other systems struggle to scale on Postgres because of design decisions that worked well in a scale-out system but don't translate well to PG.

A number of well-meaning indexes, a very wide row built to avoid joins, and a large number of state-update queries against a single column can murder Postgres performance (the "update ... set last_visited_time = ..." sort of madness, the kind of workload other systems absorb with mutable/immutable column-family classifications, etc.).
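
To make that concrete, here's a rough sketch of the pattern and one common mitigation. The table and column names (users, user_activity, last_visited_time) are made up, it assumes a unique constraint on user_activity.user_id, and psycopg2 is only there to frame the statements:

    import psycopg2

    conn = psycopg2.connect("dbname=app")
    cur = conn.cursor()

    # The painful pattern: touching one tiny column rewrites the whole very
    # wide tuple (MVCC creates a new row version), bloating the table and,
    # for non-HOT updates, every one of those well-meaning indexes.
    cur.execute(
        "UPDATE users SET last_visited_time = now() WHERE id = %s", (42,))

    # One common mitigation: move the hot column into a narrow side table,
    # so the churn rewrites a small tuple and the wide row and its indexes
    # stay untouched, at the cost of a join when both are needed.
    cur.execute("""
        INSERT INTO user_activity (user_id, last_visited_time)
        VALUES (%s, now())
        ON CONFLICT (user_id)
        DO UPDATE SET last_visited_time = EXCLUDED.last_visited_time
    """, (42,))
    conn.commit()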

There were scenarios where I'd have liked something like zHeap or Citus to be part of the default system.

If something was originally conceived in Postgres and the usage pattern matches how it does its internal I/O, everything you said is absolutely true.

But a migration can hit snags in the very system this post celebrates.

The "order by" query is a good example: a number of other systems share a boundary value from the TopK operator back down to the scanner so it can skip rows faster. Snowflake had a recent paper describing how they do input pruning mid-query off a TopK boundary.
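
Roughly the idea, as a toy sketch rather than any particular engine's implementation: for an "order by ... limit k" shape, keep the best k rows in a small heap and let the scan consult the current k-th best value (the boundary) so it can discard rows that can't qualify, instead of sorting everything. Row shape and data here are invented:

    import heapq

    def top_k_with_pruning(rows, k, key):
        heap = []       # min-heap of (key, seq, row): the best k rows so far
        pruned = 0
        for seq, row in enumerate(rows):
            v = key(row)
            if len(heap) < k:
                heapq.heappush(heap, (v, seq, row))
            elif v > heap[0][0]:        # beats the current boundary (k-th best)
                heapq.heapreplace(heap, (v, seq, row))
            else:
                pruned += 1             # skipped without buffering or sorting
        top = [row for _, _, row in sorted(heap, reverse=True)]
        return top, pruned

    rows = [("user%d" % i, (i * 37) % 1000) for i in range(10000)]
    top, pruned = top_k_with_pruning(rows, k=10, key=lambda r: r[1])
    print(len(top), pruned)  # most rows are pruned once the boundary settles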


That’s not the fault of the DB, though; that’s bad schema design. Avoiding JOINs is rarely the correct approach.
