Migrations are in JSON format rather than pure SQL for at least a couple of reasons:
1. The need to define up and down SQL scripts that are run to backfill a new column with values from an old column (e.g., when adding a constraint).
2. Each of the supported operation types is careful to sequence operations in such a way as to avoid taking long-lived locks (e.g., initially creating constraints as NOT VALID). A pure SQL solution would push this kind of responsibility onto migration authors.
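To make point 2 concrete, here is a rough sketch of the kind of lock-avoiding sequence a single "add constraint" operation implies. This is illustrative SQL, not pgroll's actual generated statements, and the table/column names are made up:

```sql
-- 1. Add a duplicate column and backfill it from the old one
--    (this is what the user-supplied "up" SQL drives).
ALTER TABLE users ADD COLUMN _new_name text;
UPDATE users SET _new_name = name;

-- 2. Create the constraint as NOT VALID so the ALTER TABLE does not
--    scan the whole table while holding an ACCESS EXCLUSIVE lock.
ALTER TABLE users
  ADD CONSTRAINT name_not_null CHECK (_new_name IS NOT NULL) NOT VALID;

-- 3. Validate in a separate step; VALIDATE CONSTRAINT takes only a
--    SHARE UPDATE EXCLUSIVE lock, so normal reads and writes continue.
ALTER TABLE users VALIDATE CONSTRAINT name_not_null;
```

Hand-writing this sequence correctly for every change is exactly the burden the JSON operation types take off migration authors.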
A state-based approach to infer migrations based on schema diffs is out of scope for pgroll for now but could be something to consider in future.
Thanks for releasing this tool! I actually interpreted the question differently: Rather than manipulating in SQL, would you consider exposing it as something like a stored procedure? Could still take in JSON to describe the schema change, and would presumably execute multiple transactions under the hood. But this would mean I can invoke a migration from my existing code rather than needing something out-of-band, I can use PG's existing authn/authz in a simple way, etc.
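For what it's worth, such an interface could plausibly look like the sketch below. This is purely hypothetical (pgroll exposes no such procedure, and the `pgroll.start` name and operation schema here are assumptions); it would need to be a PROCEDURE rather than a FUNCTION, since procedures invoked with CALL (Postgres 11+) may COMMIT internally, which the multi-transaction flow would require:

```sql
-- Hypothetical API: a procedure taking the JSON migration definition.
-- Procedures can commit between steps, unlike functions, which run
-- entirely inside the caller's transaction.
CALL pgroll.start('{
  "name": "02_add_description",
  "operations": [{
    "add_column": {
      "table": "users",
      "column": { "name": "description", "type": "text", "nullable": true }
    }
  }]
}'::jsonb);
```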
> Also, what about if the user can just maintain one schema file (no migrations), and the lib figures out the change and applies it?
Because that only solves the DDL issues and not the DML. It is still useful though.
I use a schema comparison tool that does exactly this to assist in building my migration and rollback plans. It provides a great scaffold and saves a ton of time, but when simply comparing two schemas there is no way to tell the difference between, for example, a column rename and a drop-column/add-column pair.
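The rename-vs-drop ambiguity is easy to see with a small example (table and column names are illustrative). Both migrations below produce identical end schemas, so a pure schema diff cannot distinguish them, even though one preserves data and the other destroys it:

```sql
-- Interpretation 1: a rename (data preserved)
ALTER TABLE users RENAME COLUMN full_name TO display_name;

-- Interpretation 2: drop + add (data in full_name is lost)
ALTER TABLE users DROP COLUMN full_name;
ALTER TABLE users ADD COLUMN display_name text;
```

This is why a state-based diff can scaffold the DDL but still needs a human (or explicit operations, as in pgroll's JSON format) to pin down intent.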
Also, what about if the user can just maintain one schema file (no migrations), and the lib figures out the change and applies it?