Migrations are in JSON format, as opposed to pure SQL, for at least a couple of reasons:
1. The need to define up and down SQL scripts that are run to backfill a new column with values from an old column (e.g., when adding a constraint); there's a sketch of this after the list.
2. Each of the supported operation types sequences its steps so as to avoid taking long-lived locks (e.g., initially creating constraints as NOT VALID). A pure SQL solution would push this kind of responsibility onto migration authors.
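For illustration, a migration that makes a column NOT NULL might look roughly like this (treat the exact field names as indicative; the pgroll docs define the precise schema):

    {
      "name": "02_set_description_not_null",
      "operations": [
        {
          "alter_column": {
            "table": "users",
            "column": "description",
            "nullable": false,
            "up": "CASE WHEN description IS NULL THEN '(none)' ELSE description END",
            "down": "description"
          }
        }
      ]
    }

And the lock-avoiding sequencing from point 2 corresponds to the standard Postgres pattern of adding a constraint as NOT VALID and validating it in a later step:

    -- Adding the constraint as NOT VALID skips the full-table scan,
    -- so the ACCESS EXCLUSIVE lock is held only briefly:
    ALTER TABLE users
      ADD CONSTRAINT users_description_not_null
      CHECK (description IS NOT NULL) NOT VALID;

    -- ... backfill existing rows using the migration's "up" expression ...

    -- Validation scans the table but takes only a SHARE UPDATE
    -- EXCLUSIVE lock, so reads and writes continue throughout:
    ALTER TABLE users VALIDATE CONSTRAINT users_description_not_null;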
A state-based approach to infer migrations based on schema diffs is out of scope for pgroll for now but could be something to consider in future.
Thanks for releasing this tool! I actually interpreted the question differently: rather than manipulating schemas in SQL directly, would you consider exposing it as something like a stored procedure? It could still take in JSON to describe the schema change, and would presumably execute multiple transactions under the hood. But this would mean I can invoke a migration from my existing code rather than needing something out-of-band, I can use PG's existing authn/authz in a simple way, etc.
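For what it's worth, the multi-transaction requirement isn't a blocker in principle: PL/pgSQL procedures invoked at the top level via CALL can COMMIT between steps (Postgres 11+). A purely hypothetical sketch of the interface being proposed here (pgroll does not provide this; the procedure name and shape are invented for illustration):

    -- Hypothetical, for illustration only: a procedure accepting
    -- the same JSON migration format, using transaction control
    -- to run each migration phase in its own transaction.
    CREATE PROCEDURE apply_migration(migration jsonb)
    LANGUAGE plpgsql
    AS $$
    BEGIN
      -- phase 1: e.g. create the constraint as NOT VALID
      COMMIT;
      -- phase 2: backfill rows in batches, committing as it goes
      COMMIT;
      -- phase 3: validate the constraint
    END;
    $$;

    CALL apply_migration('{"name": "02_set_description_not_null", "operations": []}'::jsonb);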