exekias | 2 years ago
- For rows inserted/updated through the old version of the schema, the up function gets executed, copying the resulting value into the new column.
- For rows inserted/updated through the new version, the down function gets executed, copying the resulting value into the old column.
For instance, you can test that the up function works before releasing the new app that relies on it, simply by checking that the values present in the new schema are correct.
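The dual-write behavior described above can be sketched in plain Python (standing in for pgroll's Postgres triggers; the column names and conversion functions here are invented for illustration, not taken from pgroll):

```python
# Toy model of pgroll's dual-schema writes: each row carries both the
# old column ("name") and the new column ("full_name"). A write through
# either schema version triggers the opposite conversion function.

def up(old_value):
    # hypothetical migration: split a single name string into parts
    first, _, last = old_value.partition(" ")
    return {"first": first, "last": last}

def down(new_value):
    # reverse conversion for writes coming through the new schema
    return f"{new_value['first']} {new_value['last']}".strip()

table = []

def insert_via_old_schema(name):
    # old apps write the old column; the "trigger" fills the new one via up()
    table.append({"name": name, "full_name": up(name)})

def insert_via_new_schema(full_name):
    # new apps write the new column; the "trigger" fills the old one via down()
    table.append({"name": down(full_name), "full_name": full_name})

insert_via_old_schema("Ada Lovelace")
insert_via_new_schema({"first": "Alan", "last": "Turing"})
```

Because both columns are kept in sync, either version of the application sees a consistent view of every row while the migration is in progress.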
surjection | 2 years ago
This migration[0] adds a CHECK constraint to a column.
When the migration is started, a new column with the constraint is created and values from the old column are backfilled using the 'up' SQL from the migration. The 'up' SQL rewrites values that don't meet the constraint so that they do.
The same 'up' SQL is also applied on the fly as applications write to the old schema: a trigger uses it to copy data into the new column, rewriting values as necessary so that the constraint on the new column is met.
As the sibling comment makes clear, it is currently the migration author's responsibility to ensure that the 'up' SQL really does rewrite values so that they meet the constraint.
[0] - https://github.com/xataio/pgroll/blob/main/examples/22_add_c...
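A pgroll migration of this shape is a JSON document whose 'up' expression rewrites non-conforming values. The sketch below is illustrative only: the table name, column name, constraint, and rewrite expression are hypothetical, and the field layout is drawn from memory of pgroll's migration format (see the linked example for the real file):

```json
{
  "name": "22_add_check_constraint",
  "operations": [
    {
      "alter_column": {
        "table": "posts",
        "column": "title",
        "check": {
          "name": "title_length",
          "constraint": "length(title) > 3"
        },
        "up": "SELECT CASE WHEN length(title) <= 3 THEN title || '---' ELSE title END",
        "down": "title"
      }
    }
  ]
}
```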
exekias|2 years ago
The good thing is that this will probably be detected during the start phase, as data would fail to be backfilled.
We are thinking of ideas to improve this in the longer term, for example:

- Having a dry-run mode where you check that the data present in the DB will stay valid after applying the up function.
- Optionally allowing this to happen, but "quarantining" the affected rows so they are no longer available in the new schema.
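The dry-run idea could look something like the following sketch (a hypothetical interface, not pgroll's actual API): apply the migration's up function to every existing value and report the rows whose result would still violate the new constraint, which are the candidates for quarantining:

```python
# Hypothetical dry-run check: run the migration's `up` function over each
# existing value and test the result against the new CHECK constraint.
def dry_run(rows, up, check):
    ok, quarantined = [], []
    for row in rows:
        new_value = up(row)
        # rows whose rewritten value still fails the constraint get quarantined
        (ok if check(new_value) else quarantined).append(row)
    return ok, quarantined

# Example: the new column requires length > 3; `up` pads short titles
# but cannot repair missing (None) values.
up = lambda title: title.ljust(4, "-") if title else title
check = lambda title: title is not None and len(title) > 3

ok, quarantined = dry_run(["hello", "hi", None], up, check)
```

Running the same check before the start phase would tell the migration author up front which rows the up function fails to repair, instead of discovering them via a failed backfill.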