1) Create a new column with the desired type (numeric[][] in your case)
2) Backfill it from the original one, executing the up function to do the casting and any required transformation
3) Install a trigger to execute the up function for every new insert/update happening in the old schema version
4) Once the migration is complete, remove the old column, as it's no longer needed in the new version of the schema
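As a rough sketch, the steps above correspond to Postgres statements like the following. This is illustrative only: the table name `measurements`, the column `readings`, and the temporary column name are hypothetical, not pgroll's actual generated names.

```python
# Illustrative sketch: the kind of DDL/DML the expand/contract steps imply.
# All identifiers here are made up for the example.
table, old_col, new_col = "measurements", "readings", "_pgroll_new_readings"

steps = [
    # 1) add the new column with the target type
    f"ALTER TABLE {table} ADD COLUMN {new_col} numeric[][];",
    # 2) backfill it from the old column, applying the "up" cast/transform
    f"UPDATE {table} SET {new_col} = {old_col}::numeric[][];",
    # 3) a trigger keeps the new column in sync while old-schema clients write
    f"""CREATE OR REPLACE FUNCTION {table}_up_fn() RETURNS trigger AS $$
BEGIN
  NEW.{new_col} := NEW.{old_col}::numeric[][];
  RETURN NEW;
END $$ LANGUAGE plpgsql;""",
    f"CREATE TRIGGER {table}_up BEFORE INSERT OR UPDATE ON {table} "
    f"FOR EACH ROW EXECUTE FUNCTION {table}_up_fn();",
    # 4) once complete, drop the old column (the contract phase)
    f"ALTER TABLE {table} DROP COLUMN {old_col};",
]

for s in steps:
    print(s, end="\n\n")
```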
Yes, for those pgroll migrations that require a new column + backfill, starting the migration can be expensive.
Backfills are done in fixed-size batches to avoid long-lived row locks, but the operation can still be expensive in terms of time and potentially I/O. Options to control the backfill rate would be a useful addition, but they aren't present yet.
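A minimal sketch of what batched backfilling looks like, assuming keyset pagination over the primary key (the batch size, cursor shape, and the in-memory "table" are assumptions for illustration, not pgroll's exact implementation):

```python
# Sketch: backfill in fixed-size batches ordered by primary key, so each
# batch's UPDATE only holds row locks briefly. The dict stands in for a
# real table; each SQL comment shows the query a real run would issue.
BATCH_SIZE = 1000

def backfill(rows, batch_size=BATCH_SIZE):
    """rows: dict of pk -> old value; returns dict of pk -> transformed value."""
    done = {}
    last_pk = -1  # keyset cursor: resume strictly after the last processed key
    while True:
        # SELECT pk, old FROM t WHERE pk > %s ORDER BY pk LIMIT %s
        batch = sorted((pk, v) for pk, v in rows.items() if pk > last_pk)[:batch_size]
        if not batch:
            break  # no rows left to backfill
        for pk, v in batch:
            # the "up" transformation, e.g. wrapping a scalar into numeric[][]
            done[pk] = [[v]]
        last_pk = batch[-1][0]  # commit between batches -> short-lived locks
    return done

result = backfill({i: i * 1.5 for i in range(2500)})
```

The key property is that each iteration touches at most `batch_size` rows, so no single statement locks the whole table for the duration of the backfill.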
Backfills are executed in batches; you can check how that works here: https://github.com/xataio/pgroll/blob/main/pkg/migrations/ba...
I don't think any of us has tested pgroll against TimescaleDB, but I would love to know about the results if anyone does!