Love SQLite - in general, though, there are many challenges with any kind of schema- or database-per-tenant setup. Consider the luxury of row-level security in a shared instance, where your migration either works or rolls back. Not so here! If you are doing a data migration and failed to account for some unexpected data, now you have people on different schema versions until you figure it out. Now, yes, if you are at sharding scale this may occur anyway, but consider that before you hit that point, a single database is easiest.
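For anyone who hasn't used it, this is roughly the shape - a minimal sketch, assuming a hypothetical invoices table with a tenant_id column and psycopg2 for the plumbing (none of these names come from upthread):

```python
# Minimal sketch of row-level security in a shared Postgres instance.
# The invoices table, tenant_id column, and app.tenant_id setting are
# all hypothetical names.
import psycopg2

DDL = """
ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;

-- Queries on this table only see rows whose tenant_id matches the
-- session setting, even if the application forgets a WHERE clause.
CREATE POLICY tenant_isolation ON invoices
    USING (tenant_id = current_setting('app.tenant_id')::uuid);
"""

def invoices_for_tenant(conn, tenant_id):
    with conn.cursor() as cur:
        # set_config scopes the rest of this session to one tenant.
        cur.execute("SELECT set_config('app.tenant_id', %s, false)",
                    (str(tenant_id),))
        cur.execute("SELECT id, total FROM invoices")  # policy filters rows
        return cur.fetchall()
```

(One caveat worth knowing: RLS is bypassed by superusers and, by default, by the table owner, so the app should connect as an ordinary role.)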
You will possibly want to combine the data for some reason in the future as well, or move ownership of resources atomically.
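To make the atomic-move point concrete, a hedged sketch (projects and tenant_id are invented names, psycopg2-style API):

```python
# Sketch: with one shared database, moving a resource between tenants
# is a single transaction that either fully happens or doesn't.
def transfer_project(conn, project_id, from_tenant, to_tenant):
    with conn:  # psycopg2 idiom: commit on success, rollback on error
        with conn.cursor() as cur:
            cur.execute(
                """UPDATE projects SET tenant_id = %s
                   WHERE id = %s AND tenant_id = %s""",
                (to_tenant, project_id, from_tenant),
            )
            if cur.rowcount != 1:
                raise RuntimeError("project not owned by source tenant")
```

With a database per tenant, the same operation becomes a copy plus a delete across two stores, with no transaction spanning them.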
I'm not opposed to this setup at all, and it does have its place. But we are running away from a schema-per-tenant setup at warp speed at work. There are so many issues if you don't invest in it properly, and I don't think many are prepared when they initially have the idea.
The funny thing is that about a decade ago, the app was born on a SQLite per tenant setup, then it moved to schema per tenant on Postgres, now it's finally moving to a single schema with RLS. So, the exact opposite progression.
I don't know - I have experience working with monster DBs in production, and never again. Under a large enough load, every change becomes risky because you can't fully test performance corner cases. Having a free-tier user take out your prod because they found a non-indexed code path is also a classic.
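One cheap mitigation for that last failure mode is to assert in tests that hot queries never plan as full table scans. Rough sketch with SQLite's EXPLAIN QUERY PLAN (the events schema is made up; Postgres has its own EXPLAIN for the same job):

```python
# Sketch: catching the non-indexed code path in tests rather than prod.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (tenant_id INTEGER, kind TEXT)")

def assert_indexed(conn, sql, params=()):
    plan = conn.execute("EXPLAIN QUERY PLAN " + sql, params).fetchall()
    for _, _, _, detail in plan:
        # SQLite reports "SCAN <table>" for a full scan and
        # "SEARCH <table> USING INDEX ..." when an index is used.
        if detail.startswith("SCAN"):
            raise AssertionError("full table scan: " + detail)

# Raises until something like
#   CREATE INDEX ix_events_tenant ON events(tenant_id)
# exists:
assert_indexed(conn, "SELECT * FROM events WHERE tenant_id = ?", (1,))
```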
> If you are doing a data migration and failed to account for some unexpected data, now you have people on different schema versions until you figure it out.
That shouldn't be a big issue. Any service large/complex enough to care does schema upgrades in phases: 1. Make code future-compatible. 2. Migrate data. 3. Remove old schema support.
So typically it should be safe to run between steps 1 and 2 for a long time (modulo new bugs, of course). As an ops-y person, I'm comfortable with the system running mid-migration as long as the steps are followed as described.
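A concrete (entirely hypothetical) example of those phases - renaming users.fullname to users.display_name without ever breaking running code:

```python
# Sketch of the three phases as separate deploys. The users table and
# column names are invented for illustration.

PHASE_1_EXPAND = """
-- Ship alongside code that writes both columns and reads display_name first.
ALTER TABLE users ADD COLUMN display_name text;
"""

PHASE_2_BACKFILL = """
-- Safe to run (and re-run) while both code paths are live.
UPDATE users SET display_name = fullname WHERE display_name IS NULL;
"""

PHASE_3_CONTRACT = """
-- Only after no deployed code reads fullname anymore.
ALTER TABLE users DROP COLUMN fullname;
"""

def apply_phase(conn, phase_sql):
    with conn:  # one transaction per phase
        conn.cursor().execute(phase_sql)
```

The long safe window the parent describes is everything between phase 1 and phase 3.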
> That shouldn't be a big issue. Any service large/complex enough to care does schema upgrades in phases: 1. Make code future-compatible. 2. Migrate data. 3. Remove old schema support.
Exactly this: schema migrations should be an append, deprecate, drop operation over time.
> now you have people on different schema versions until you figure it out.
That can be a good thing if your product has, say, < 100 customers, as each might have different upgrade timelines and needs. I even know of businesses like this who do custom work for some customers, so they essentially aren't even running the same code (gasp).
> The funny thing is that about a decade ago, the app was born on a SQLite per tenant setup, then it moved to schema per tenant on Postgres, now it's finally moving to a single schema with RLS.
To be fair, RLS was not yet available a decade ago :) It appeared in PostgreSQL 9.5, released in January 2016.
> If you are doing a data migration and failed to account for some unexpected data, now you have people on different schema versions until you figure it out. Now, yes, if you are at sharding scale this may occur anyway, but consider that before you hit that point, a single database is easiest.
This can be accounted for and handled. Though if schema issues are enough of a scare, I wonder if a document-DB-style embeddable database like CouchDB/PouchDB might make more sense.
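If anyone wants that shape without leaving an embedded SQL engine, a rough sketch - JSON documents with lazy read-time upgrades instead of store-wide migrations (the docs table and schema_version field are invented):

```python
# Sketch of the document-store idea on top of SQLite: JSON blobs, and
# old document shapes get upgraded on read rather than migrated in bulk.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS docs (id TEXT PRIMARY KEY, body TEXT)")

def save(doc_id, doc):
    conn.execute("INSERT OR REPLACE INTO docs VALUES (?, ?)",
                 (doc_id, json.dumps(doc)))

def load(doc_id):
    row = conn.execute("SELECT body FROM docs WHERE id = ?",
                       (doc_id,)).fetchone()
    doc = json.loads(row[0])
    # Upgrade old document shapes here; no big-bang migration means
    # no tenants stuck on different schema versions.
    if "schema_version" not in doc:
        doc["schema_version"] = 1
    return doc

save("a1", {"name": "example"})
print(load("a1"))
```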