Another example of how impressive Postgres is, well done.
I’m always amazed by how much faster Postgres gets with each release, unlike most other software, which often gets slower.
The many parallel processing improvements in 11 should keep this up, along with array and composite domains, stored procedures, and even better window function framing/exclusion.
Very few weaknesses vs Oracle now, apart from one glaring one: materialized views. The lack of automatic incremental refresh severely limits their usefulness. (SQL Server materialized views are also fairly useless because they have so many limitations.)
Would love to see a NOT IN foreign key constraint. I'd imagine this would be quite easy to implement on the back of the existing IS IN foreign keys, and it would be incredibly useful.
Also, why not increase the default identifier length from 63 to 127? I know it can be done on first install (and only then), so it's a pain to do, and I find long constraint names useful for understanding errors.
What I feel is lacking in PG (though it can easily be managed by adding the respective extensions) is an unsigned int type like MySQL's, and an option to make UNIQUE columns case-insensitive while keeping case-sensitive as the default (as it is currently).
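For the case-insensitive uniqueness part, a rough sketch of the usual workarounds today (table/column names made up): either a unique index over lower(), or the citext extension.

CREATE TABLE users (
    id    serial PRIMARY KEY,
    email text NOT NULL
);

-- Option 1: enforce case-insensitive uniqueness with an expression index
CREATE UNIQUE INDEX users_email_ci_key ON users (lower(email));

-- Option 2: the citext extension makes comparisons on the column case-insensitive,
-- so a plain UNIQUE constraint does the job
CREATE EXTENSION IF NOT EXISTS citext;
ALTER TABLE users ALTER COLUMN email TYPE citext;
ALTER TABLE users ADD CONSTRAINT users_email_key UNIQUE (email);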
Hmm, something I'm not understanding from the description of the implementation is how it handles this sequence of operations:
1.) Start with a nullable column with some existing rows with null values.
2.) Make the column required with default=100. Postgres now marks atthasmissing as true and attmissingval as 100. Rows will be lazy-populated to 100, but aren't immediately modified.
3.) Make the column nullable again.
4.) Add some new null values.
5.) Make the column required with default=200.
At this point, the right behavior is that all originally-null rows from step #1 have value 100 and all rows added in step #4 have value 200, but that's not representable with the atthasmissing/attmissingval system. I guess maybe in steps 3 or 5 it needs to just bail out and do a full table rewrite?
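Roughly, the sequence I have in mind in SQL (table name made up):

CREATE TABLE t (id serial PRIMARY KEY, val int);   -- step 1: nullable column
INSERT INTO t (val) VALUES (NULL);                 -- some existing NULL rows

ALTER TABLE t ALTER COLUMN val SET DEFAULT 100;    -- step 2
ALTER TABLE t ALTER COLUMN val SET NOT NULL;

ALTER TABLE t ALTER COLUMN val DROP NOT NULL;      -- step 3
INSERT INTO t (val) VALUES (NULL);                 -- step 4

ALTER TABLE t ALTER COLUMN val SET DEFAULT 200;    -- step 5
ALTER TABLE t ALTER COLUMN val SET NOT NULL;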
I was thinking through this problem a while ago and one idea I had was that you could version your schema and have each row remember what the schema version was when the row was created. Then you could lazy-populate any null value by looking up the next default value for the column chronologically in schema history. But I guess maybe that's complex enough or would hurt common-case perf enough that Postgres realistically isn't ever going to implement it.
As others have said, this is about adding entirely new columns to tables, not modifying existing columns. However....
This should fail at step 2 (and again at step 5). Modifying the default value of a column never modifies its existing rows. DDL (e.g., ALTER TABLE) should never modify data; you need DML (e.g., UPDATE) to do that. Example:
CREATE TABLE product (
id SERIAL PRIMARY KEY NOT NULL,
quantity int NULL
);
INSERT INTO product (quantity) VALUES (NULL);
ALTER TABLE product ALTER COLUMN quantity SET DEFAULT 0;
ALTER TABLE product ALTER COLUMN quantity SET NOT NULL;
The second ALTER TABLE will fail with the error: column "quantity" contains null values.
SELECT * FROM product WHERE quantity IS NULL;
And that will still return the NULL row.
You'd need to run this before modifying the NULL constraint to NOT NULL:
UPDATE product SET quantity = 0 WHERE quantity IS NULL;
When you add or modify a constraint, the existing data must already conform to the new constraint or it will fail. When you modify a default value, you only affect future rows inserted into the table. A good RDBMS doesn't silently modify your data; it raises an error rather than doing something you didn't explicitly tell it to do.
Remember that a column without a specified default actually has an implicit DEFAULT NULL (unless it's something special like SERIAL). That's how RDBMSs know to throw an error when you don't specify a value for a NOT NULL column: they try to default the column to NULL and can't.
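A tiny illustration (table name made up):

CREATE TABLE demo (
    a int NOT NULL,   -- no explicit default
    b int             -- no explicit default either
);

INSERT INTO demo (a) VALUES (1);   -- fine: b falls back to the implicit NULL default
INSERT INTO demo (b) VALUES (2);   -- fails: the implicit NULL default violates NOT NULL on a
-- ERROR:  null value in column "a" violates not-null constraint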
Thanks! I guess I had some gaps in my mental model of how postgres handles these things, this clears it up quite a bit!
Just to confirm/repeat back:
* When you say DDL operations (ALTER TABLE) will "never modify data", they're still allowed to add a column with a non-null default, since that's not "modifying data" by your definition; it's picking the value to use when introducing new data. I shouldn't think of it as "set implicit null to 3", I should think of it as "set nothingness to 3", just like how it would "set nothingness to null" in normal column creation. But it would never be willing to "set null to 3" in a DDL operation. (I've sketched this after the list.)
* Postgres doesn't have an O(1) way (meaning: never does a table scan while holding a significant lock) to transition a nullable column to non-null, since it'll need to check the contents of the entire table. (Although maybe it could use an index?) I guess I was hoping that the default would help here, but really it only affects creation of new values.
* More generally speaking, Postgres's design direction is to put data integrity as a higher priority than scale. It'll do both if it can (like in the article), but there are still cases where schema changes might cause user disruption if done on a gigantic table. It could have been designed so that schema changes are always fast (like they tend to be in NoSQL systems), but that would come at the expense of other nice properties.
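To make that first bullet concrete with the product table from above (the new column name is made up), my understanding is it'd behave like this:

-- "set nothingness to 3": every existing row gets 3 for the brand-new column
ALTER TABLE product ADD COLUMN reorder_level int NOT NULL DEFAULT 3;

-- but DDL never does "set null to 3": changing an existing column's default
-- leaves stored NULLs untouched
ALTER TABLE product ALTER COLUMN quantity SET DEFAULT 3;
SELECT * FROM product WHERE quantity IS NULL;  -- any row still NULL stays NULL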
* Correct. Adding a column requires providing a value for that column, and it will generally use NULL or the default value of the column when doing so. It does that because it has to (a column must have a value, even if it's NULL). The only other time you may see DDL "changing" data is certain data type changes: for example, changing between an approximate and a precise numeric type (e.g., float vs. decimal), or between types with different levels of precision (e.g., SQL Server datetime vs. datetime2), will rewrite the stored values, though you may still get errors or warnings (sketched after this list).
* Correct. Adding a constraint or unique index will always require scanning the data to ensure that it's consistent with the new rules. It's safe to assume that modifying the schema of a table in essentially any way will require an exclusive table lock to complete. Even creating an index will often lock a table. PostgreSQL does let you create some indexes without a blocking lock via the CONCURRENTLY option [0] (also sketched after this list), and other RDBMSs may have similar options, but that's an additional feature that you'll want to check in your specific implementation, with its own caveats.
* Correct. The relational model that traditional SQL RDBMSs implement is, as a whole, focused on ACID [1] compliance and strict determinism. It's focused on being correct first and foremost, and things like performance and availability take a back seat to that (which is where NoSQL data stores step in with different focuses). You don't need that kind of focus on a social media site, where it's more important to be available and nobody much cares if a bit of data is lost, broken, or out of date now and then; but a financial, government, or medical system must store data correctly and must correctly retrieve the data it claimed to store. Accuracy is paramount in those systems. This is one reason why DBAs have a fairly universal reputation as extremely cautious perfectionists (the other is that if you don't design your tables and normalize your data, it becomes extremely difficult to work with down the line).
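To make those two concrete (table, column, and index names are made up; check your specific RDBMS for the exact behavior):

CREATE TABLE measurements (id serial PRIMARY KEY, reading float8);

-- A type change is one of the few DDLs that rewrites stored values; going from an
-- approximate float to an exact numeric can round or error depending on the data
ALTER TABLE measurements
    ALTER COLUMN reading TYPE numeric(10,2) USING reading::numeric(10,2);

-- PostgreSQL: build an index without taking a blocking write lock
CREATE INDEX CONCURRENTLY measurements_reading_idx ON measurements (reading);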
MySQL is somewhat notorious for breaking many of the above rules, especially prior to MySQL 5.0. That is half the reason why it tends to have such a poor reputation among DBAs (the other reason being that Oracle is a terrible vendor). MySQL used to allow you to store invalid dates, would silently modify or truncate data rather than error, etc. It's much, much better now, but reputations are hard to repair in a world of strict rules and deterministic functions.
The patch is about ALTER TABLE ADD COLUMN with a non-null DEFAULT. Your step 1 doesn't involve a default value, so the patch doesn't affect the behavior there. Postgres also won't do anything with atthasmissing or attmissingval on steps 2 and 5, because those are ALTER TABLE ... ALTER COLUMN SET DEFAULT / SET NOT NULL on an existing column, which is out of scope for the patch under discussion.
I see, thanks! I guess I was interpreting it as "this is the new way that default values work in Postgres", but really it's only specific schema changes that are affected.
I think the most impressive part of a well-maintained software project, to me, is the ability of the maintainers to keep patches organized and maintain coherent and comprehensive change logs. I can barely manage this in my own four-man team.
In a four-man team, the cost of maintaining program structure is enough to grind everything else to a halt. Larger teams on larger projects have an incentive to keep the structure organized, because without a clear codebase the project will grind to a halt, and they have the manpower to delegate maintenance tasks without killing project speed.
Good to know it's not "just me" then. I seem to have found a balance between "structure that appeals to the business side" and "not placing annoying restrictions on my teammates".
You could certainly get the same effect without downtime prior to this. It just needed a few extra steps and some cooperation from the application: adding a nullable column without a default has always been instantaneous, so you would do that, then update all the rows that are missing the value.
Next, you add the default to the schema, which is in the millisecond range.
And then you add the NOT NULL constraint, which will also be in the ms range because all/most rows already have a value.
You can do all of this in a transaction too, because Postgres' schema modifications are transactional.
This is how I have added countless columns with default values without downtime over the last few years.
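Roughly, with made-up table/column names (assuming an existing big table "orders"):

-- 1: instantaneous, because the column is nullable and has no default
ALTER TABLE orders ADD COLUMN status text;

-- 2: backfill existing rows (on big tables you'd do this in batches)
UPDATE orders SET status = 'new' WHERE status IS NULL;

-- 3: catalog-only change, only affects future inserts
ALTER TABLE orders ALTER COLUMN status SET DEFAULT 'new';

-- 4: passes because every row now has a value
ALTER TABLE orders ALTER COLUMN status SET NOT NULL;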
This is incredibly slow when updating a billion rows to give a value to a new column, as a blanket UPDATE will still run as one implicit transaction, as will the constraint check when the NOT NULL is added. It can also use a LOT of disk space.
If you are doing this, pay attention to the number of dead tuples accumulating across batches. We had to VACUUM ANALYZE after each batch on at least one occasion, where the query planner stopped using the index after a certain threshold of dead tuples. Even though there was no table lock, the performance hit from long-running queries made the application unusable.
The following query will show dead tuple count for the table: select tuple_count, dead_tuple_count from pgstattuple('table_name');
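For illustration, the shape of the loop we ended up with (names made up, continuing the example above and assuming an integer id primary key; tune the batch size to your workload):

-- repeat from the application until it updates 0 rows
UPDATE orders SET status = 'new'
WHERE id IN (
    SELECT id FROM orders WHERE status IS NULL LIMIT 10000
);

-- keep dead tuples in check between batches so the planner keeps using the index
VACUUM ANALYZE orders;

-- and keep an eye on the numbers
SELECT tuple_count, dead_tuple_count FROM pgstattuple('orders');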
The article does address this: it still has to check the DB for NULLs after you're done. Less downtime, but for huge tables it does take a significant amount of time.
This optimization applies only to new columns with a stable default, so a table scan is not required to check for NULLs: either the default is NULL or it isn't.
I was talking about the old way of doing it: you could create a nullable column, backfill the value in batches, then add the default and make it non-nullable. But that last part took some time (not in the range of ms) for large tables.
You can create a temporary partial (conditional) index on that table; then the check is near-instantaneous. There is also `create index concurrently`, which won't block writes on the table.
With this, the re-check for NULLs when adding the NOT NULL constraint will hardly take any time at all.
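A sketch of that approach (names made up, continuing the example above; the partial index mainly makes it cheap to verify there is nothing left to backfill):

-- partial index, built without blocking writes
CREATE INDEX CONCURRENTLY orders_status_null_idx
    ON orders (id) WHERE status IS NULL;

-- near-instant check that no NULLs remain
SELECT count(*) FROM orders WHERE status IS NULL;

ALTER TABLE orders ALTER COLUMN status SET NOT NULL;

-- the index has served its purpose
DROP INDEX CONCURRENTLY orders_status_null_idx;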
Sorry, I should have clarified: “accidental downtime”. This doesn’t make anything new possible, but it will definitely eliminate a common way to shoot oneself in the foot.
No, another example of a feature that SQL Server has had for a decade or more (in Standard Edition). Maybe you're thinking of online index creation?
There are lots of reasons to criticize MSSQL (ridiculous cost, poor syntax/language extensions, awful text search, a painful process for ad hoc (non-ETL) imports, etc.), but my decade-old 2008 edition is still superior to PG in many ways. Why can't we have honest comparisons?
I have personally witnessed bogus schemas percolating up into the Django models.py because of problems related to this. Good to know that Postgres 11 fixes that. The fact that PG continues to improve in substantial ways, even still, is pretty impressive for such a venerable piece of software. Well done.
This might have been the last case (for me at least) of having to choose a sub-standard database structure because of a limitation of the RDBMS. Now all migrations can add NOT NULL DEFAULT columns with impunity! Even on 100GB tables! Not sure exactly how much faster this will be, though.
>>> and a gaping hole in Postgres’ operational story is filled
Are there any other holes one should be aware of? I'm looking forward to migrating a database with ~100 tables, many of which have ~100 million rows and lots of FKs, to PostgreSQL... The operational side of things is what I know the least. Any pointers?
While 100 million rows is not exactly a big number for PostgreSQL, so you might be fine without partitioning, but still:
> Despite the flexibility, this approach forces the planner to perform an exhaustive search and to check constraints on each partition to determine whether it should be present in the plan or not. Large amount of partitions may result in significant planning overhead.
> The pg_pathman module features partition managing functions and optimized planning mechanism which utilizes knowledge of the partitions' structure.
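For a flavour of it, roughly (from memory of the pg_pathman README, so treat the exact function signature as an assumption and check the docs; table/column names are made up):

CREATE EXTENSION pg_pathman;

CREATE TABLE journal (
    id  bigserial,
    dt  timestamp NOT NULL,
    msg text
);

-- split journal into monthly RANGE partitions on dt
SELECT create_range_partitions('journal'::regclass, 'dt',
                               '2018-01-01'::date, '1 month'::interval);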
If it's OLTP, the best advice is to create a reasonable load tester and use it for all future schema migrations. This will surface any unexpected table locks due to DDL commands like CREATE INDEX, ALTER TABLE, etc.
We have tables with 5B rows and don't use partitioning (yet). I do recommend testing and optimising the vacuum settings for large or frequently updated tables.
We increased "vacuum_cost_limit" globally and decreased "autovacuum_vacuum_scale_factor" and "autovacuum_analyze_scale_factor" for large tables, most of them by a factor of 4-5x.
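Concretely, the knobs look something like this (the table name and the values here are made up; ours depend on table size and churn):

-- global: let each (auto)vacuum run do more work before napping
ALTER SYSTEM SET vacuum_cost_limit = 1000;
SELECT pg_reload_conf();

-- per large table: kick off autovacuum/analyze after a smaller fraction of changes
ALTER TABLE big_events SET (
    autovacuum_vacuum_scale_factor  = 0.05,
    autovacuum_analyze_scale_factor = 0.02
);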