
The "SQLite is just a file" thing is actually an advantage. The example of a website is actually a pretty poor one, since any website that needs to scale beyond a single box has many options. The two easiest ones are:

- Mix static and dynamic content generation (and let's face it, most websites are mostly static from a server perspective)

- Designate a writer node and use any of the multiple SQLite replication features

But, in short, if you use an ORM that supports both SQLite and Postgres you'll have the option to upgrade if your site brings in enough traffic. Which might never happen, and in that case you have a trivial backup strategy and no need to maintain, secure and tweak a database server.



You don't need to maintain, secure and tweak Postgres any more than you would SQLite. Just install it and it'll work. Postgres backup is a single command. And actually you're supposed to create SQLite backups with a special command as well; if you're just copying a live database file, you're doing it wrong.
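For reference, the "special command" on the SQLite side is its backup API: copying the database file while a writer is active can capture a torn, mid-transaction state. A minimal sketch using Python's built-in sqlite3 module (file names are made up):

```python
import sqlite3

# Back up a live SQLite database with the backup API instead of
# copying the file, which can snapshot a half-written state.
src = sqlite3.connect("app.db")
src.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")
src.execute("INSERT INTO t (v) VALUES ('hello')")
src.commit()

dst = sqlite3.connect("app-backup.db")
with dst:
    src.backup(dst)  # consistent snapshot, safe even while writers are active

print(dst.execute("SELECT v FROM t").fetchone()[0])  # hello
src.close()
dst.close()
```

The same API is exposed by the `sqlite3` CLI as the `.backup` dot-command.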

I really don't see any cons with Postgres over SQLite for server applications.


> You don't need to maintain, secure and tweak postgres any more than you would with SQLite.

That's not true. Postgres is another standalone process, SQLite is a library. Even if you have your service and Postgres on the same box, you need to account for yet another process that can independently go down, that is competing for resources etc...


I've never had Postgres "go down". It might if you run out of disk space, but that is going to be a bad time with any database. It is not "competing for resources" when it is running the workload your app is sending it. You may as well say Sqlite is competing for resources in that case.


> You don't need to maintain, secure

Of course you need to maintain postgres.

Major version upgrades are not automatic, you can't just install a newer binary/library version and start it as you can for SQLite. You need to shut down the DB and run `pg_upgrade`, or write manual full export-import scripts with `pg_dump`/`pg_dumpall`/`pg_restore`/`psql`.

And good luck deciding between the different format options, as some of them are unsupported across some of these tools, some cannot export and reimport the full database cluster, there's no idempotent "just import this snapshot" operation (point-in-time restore), lack of progress reporting, etc.

Here are some notes I took on the topic:

    # Note on Postgres backups
    #
    # Unfortunately, postgres backup+restore is not straightforward.
    #
    # * Backups created with `pg_dumpall`, which produces a .sql file,
    #   cannot simply be used for point-in-time recovery.
    #   They need to be restored with `psql` (not `pg_restore`),
    #   which errors if the data already exists.
    # * You could probably tell it to ignore errors, but naturally it'll just
    #   run through `INSERT ...`, so it's not a proper point-in-time recovery,
    #   because it doesn't remove data newer than the backup as expected.
    # * To use `pg_restore` (which can ignore existing data, re-creating
    #   everything with the `--clean` flag), you need to use `pg_dump`
    #   (not `pg_dumpall`), which cannot backup *all* databases,
    #   only a single given one.
    # * Further, `pg_restore` does not accept `--format=plain` SQL backups
    #   (the default created by `pg_dump`). Only the non-plain backups are
    #   accepted, which are less readable for a human to determine whether
    #   a given backup is the one desired to restore based on the data.
    #
    # As a result, we aim for restoration using `pg_restore --clean`,
    # backing up only the `postgres` database using `pg_dump -d postgres`.
    # This works for us because we currently store all our tables in the
    # `postgres` database.
    # We use `--format=tar` because it is an uncompressed format, which
    # * deduplicates better than compressed formats, and
    # * allows a human to `grep` in plain text for desired contents.
Why isn't there a mode with which I can just tell postgres to migrate my data automatically upon startup with a newer version?

And why can't I just have postgres-as-a-library to link into my binary, like I can do with SQLite?

You also can't just run postgres as root (e.g. in a container), and have to set up UNIX users to work around that, because postgres has it hardcoded to avoid running as root. This, too, you don't need to do with SQLite.

Also, postgres is harder to secure.

You need to either use TCP and ensure that other UNIX users on the same system can't just connect, or use UNIX Domain Sockets which have a 108 char path length restriction [1] (which is of course not documented in postgres's docs [2]), so it will suddenly break your CI when its path changes from

    /var/lib/jenkins/workspace/my-branch-name/postgres/sockets/.s.PGSQL.5432
to

    /var/lib/jenkins/workspace/my-longer-branch-name-for-additional-cool-feature-12456/postgres/sockets/.s.PGSQL.5432
And then you need to tell people to use shorter branch names "because otherwise the DB doesn't work".
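For the curious, the limit comes from the size of `sockaddr_un.sun_path` (108 bytes on Linux including the trailing NUL, so 107 usable characters; other platforms differ, e.g. 104 bytes on macOS). A rough pre-flight check, sketched in Python using the socket file name Postgres appends to `unix_socket_directories`:

```python
import os

# sun_path is 108 bytes on Linux including the trailing NUL; the
# exact limit is platform-dependent (assumption: Linux here).
SUN_PATH_MAX = 107
SOCKET_FILE = ".s.PGSQL.5432"  # Postgres's socket file name for port 5432

def socket_dir_ok(directory: str) -> bool:
    """Return True if a Postgres socket in this directory fits sun_path."""
    return len(os.path.join(directory, SOCKET_FILE)) <= SUN_PATH_MAX

print(socket_dir_ok(
    "/var/lib/jenkins/workspace/my-branch-name/postgres/sockets"))  # True
print(socket_dir_ok(
    "/var/lib/jenkins/workspace/"
    "my-longer-branch-name-for-additional-cool-feature-12456"
    "/postgres/sockets"))  # False
```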

Postgres is still my DB of choice, but it would be very misleading to say that it needs no maintenance and just works.

[1]: https://serverfault.com/questions/641347/check-if-a-path-exc...

[2]: https://www.postgresql.org/docs/current/runtime-config-conne...


> if you use an ORM that supports both SQLite and Postgres you'll have the option to upgrade if your site brings in enough traffic

I'll never understand this idea that Postgres and SQLite are somehow interchangeable when the time is right.

My database and Postgres are _literally_ the core definition of everything that my application does. My app is written in Rust, but that doesn't matter because it's a _Postgres_ application. I use Postgres-specific features extensively. Converting the application to SQLite would be essentially a re-write, and it would be worse in every way.

Also, I generally just don't understand this fad of running production backends on SQLite. SQLite is great for what it is, a tiny little embeddable client-side database. But it is a _terrible_ database for non-trivial business applications where referential integrity, real types instead of "everything is a string", and battle-tested scaling are essential.


I don't think people often switch from Postgres to SQLite; it's probably more common (and much easier) to prototype with SQLite first and then switch.

If by referential integrity you just mean FK constraints, you can turn that on in sqlite3.
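For reference, enforcement is off by default for backwards compatibility and has to be enabled per connection, e.g. with Python's built-in sqlite3 module (table names made up):

```python
import sqlite3

# Foreign keys are not enforced until the pragma is set on the connection.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id)
)""")

try:
    con.execute("INSERT INTO posts (user_id) VALUES (42)")  # no such user
except sqlite3.IntegrityError as e:
    print(e)  # e.g. "FOREIGN KEY constraint failed"
```

Without the pragma, that insert would silently succeed.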

I think SQLite is pretty good for a lot of use cases. An Axum/sqlite CRUD app should be able to handle at least few hundred requests per second on a medium powered box, which is good enough for a lot of things.

Postgres is really powerful, but I don't think it's actually that common to structure your app around its unique features.


Lately I've been using sqlite in this way on small projects I'm just hacking at, but after seeing pglite this week (https://news.ycombinator.com/item?id=41224689) I'll probably give that a try next time


The fact that you use Postgres-specific features extensively is a design decision that many people would never make, regardless of their trust in the engine.


That's like saying I use Rust but I don't want to use any Rust specific features because I might want to port it to Python someday.

It's complete nonsense.


In the enterprise space, it isn't. You often have to build software that will run on different database back-ends. Just because your worldview doesn't align with other people's doesn't mean you own the truth...


IMO a downside of SQLite that isn't discussed as often as it should be is the poor support for some table operations like ALTER COLUMN. Need to change a column to null / not null? Drop a foreign key constraint? Tough luck, in some cases the only way to implement a change is recreating the table.
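For anyone who hasn't hit this: the workaround is the recreate-copy-rename dance. A sketch with Python's sqlite3, adding a NOT NULL constraint to an existing column (table and column names are made up):

```python
import sqlite3

# SQLite's ALTER TABLE can't change a column's NOT NULL constraint
# in place, so the table has to be rebuilt and swapped in.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO items (name) VALUES ('widget')")
con.commit()

con.executescript("""
    PRAGMA foreign_keys = OFF;  -- must be set outside the transaction
    BEGIN;
    CREATE TABLE items_new (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL        -- the constraint we wanted to add
    );
    INSERT INTO items_new SELECT id, name FROM items;
    DROP TABLE items;
    ALTER TABLE items_new RENAME TO items;
    COMMIT;
    PRAGMA foreign_keys = ON;
""")

print(con.execute("SELECT name FROM items").fetchone()[0])  # widget
```

A real migration also has to recreate indexes, triggers and views on the old table, which is exactly the kind of busywork Postgres's ALTER TABLE does for you.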


Even without an ORM that supports both, as long as the DB layer is reasonably separated in your application it shouldn't be too much effort to switch. And if you've scaled to the point where it matters, you probably have the resources to do so.


>> as long as the DB layer is reasonably separated in your application

I find this is easy in retrospect but tricky when you’re building a system. It’s all shades of grey when you’re building:

Should I put my queue in my DB and just avoid the whole 2PC drama (saga is a more apt word, but too much opportunity for confusion in this context)?

I probably should implement that check constraint or trigger, but should I add a plugin to my DB for better performance and correctness of special type X, or just use a trigger for that too?

Should I create my own db plugin so that triggers can publish messages themselves without going through an app layer?

In retrospect it’s easy to see when you went too far, or not far enough. At decision time, the design document your team is refining starts to head past the ~10-page sweet spot.


That's true, even with "perfect" abstractions, switching gets more complicated as you use more complex database features.

It's only really easy if you push most of your constraints and triggers to the application. In practice, I've only ever switched databases with really simple CRUD stuff and have otherwise been able to predict that I'll eventually want Postgres/RabbitMQ/etc and build it in from the start.


It can be an advantage or a disadvantage, depends on what you're doing.


Agreed. For some situations, it might well be easier to take advantage of the static nature of the site and use SQLite compared to setting up a Postgres server. For others, setting up a server could be easier than the “easy” options.


> The "SQLite is just a file" thing is actually an advantage.

It's more beside the point. Everything in Linux is "just a file".


It can be a process and some unnamed sockets also, which is the main difference between PostgreSQL and SQLite.


SQLite is a library, the process still exists, it's just that it's part of your app now.



