exekias | 2 years ago
We believe there is a better way, which is how pgroll works: you can start a migration and keep the old and new schemas working for as long as you need to roll out your app. If the new version of the app/schema doesn't behave as you expected, you only need to roll back the commit and undo the migration. pgroll guarantees that the previous version keeps working throughout the process.
There is a graph in the readme depicting this concept:
https://github.com/xataio/pgroll/blob/main/docs/img/migratio...
dot5xdev|2 years ago
If I delete a "last_name" column, apply the migration, and then decide I shouldn't have deleted users' last names. Do I get that data back?
indigo945|2 years ago
"Applying the migration" doesn't actually do anything to the table, it just creates a new schema containing views over the old one, where the view for the table whose column you deleted hides the column. You can then try if your app still works when using accessing this schema instead of the old one. If you're happy, you can "complete" the migration, at which point only the table structure actually gets altered in a non-reversible way.
unknown|2 years ago
[deleted]
pcthrowaway|2 years ago
Our workflow was something like:
1) Create the new hypertable
2) Create an AFTER INSERT trigger on the first table to insert transformed data from the first table into the second, and delete it from the first (this ensured applications could continue running against the first schema/table, without any new data being added to the first table after the migration)
3) Iterate over the first table in time-bucketed batches, using a plpgsql block to move chunks of data from the first table to the second
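For concreteness, the trigger-plus-batched-drain workflow above looks roughly like this. A minimal sketch using SQLite instead of TimescaleDB/Postgres; the table names, the text-to-real transformation, and the batch size of 1 are all illustrative, and the real version would use a plpgsql block and time buckets:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings_old (ts INTEGER, value TEXT);      -- original table
CREATE TABLE readings_new (ts INTEGER, value REAL);      -- transformed target
INSERT INTO readings_old VALUES (1, '1.5'), (2, '2.5');  -- pre-existing rows

-- Step 2: new writes to the old table are transformed, copied, and removed,
-- so applications keep inserting into the old schema with no code changes.
CREATE TRIGGER move_on_insert AFTER INSERT ON readings_old
BEGIN
    INSERT INTO readings_new VALUES (NEW.ts, CAST(NEW.value AS REAL));
    DELETE FROM readings_old WHERE rowid = NEW.rowid;
END;
""")

# A new insert flows straight through to the new table via the trigger.
conn.execute("INSERT INTO readings_old VALUES (3, '3.5')")

# Step 3: drain historical rows in small batches (time buckets in the original;
# batch size 1 here just to show the loop iterating).
while True:
    batch = conn.execute(
        "SELECT rowid, ts, value FROM readings_old ORDER BY ts LIMIT 1"
    ).fetchall()
    if not batch:
        break
    for rowid, ts, value in batch:
        conn.execute("INSERT INTO readings_new VALUES (?, ?)", (ts, float(value)))
        conn.execute("DELETE FROM readings_old WHERE rowid = ?", (rowid,))
```

The key property is that at every point the old table only shrinks, so applications reading or writing the old schema never see duplicated or lost rows.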
Would pgroll enable a similar workflow? I'm curious whether pgroll similarly creates a trigger so that apps can keep working with the initial schema as a stopgap. I assume pgroll would perform the whole migration as a series of column updates on a single table, but I'm unclear on whether it attempts to migrate all the data in one step (potentially locking the table for longer periods) while still allowing applications using the old schema to continue working, so there is no downtime as changes are rolled out.
Has pgroll been tested with timescaledb at all?
exekias|2 years ago
1) Create a new column with the desired type (numeric[][] in your case)
2) Backfill it from the original one, executing the up function to do the casting and any required transformation
3) Install a trigger to execute the up function for every new insert/update happening in the old schema version
4) After complete, remove the old column, as it's no longer needed in the new version of the schema
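These steps can be sketched end to end. A minimal illustration using SQLite in place of Postgres: the table name, the text-to-real "up" transformation, and doing the backfill as one UPDATE are all simplifications (pgroll's real up functions are SQL expressions run by pl/pgsql triggers, and the trigger is installed before the backfill so no write can slip between the two):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO m (val) VALUES (?)", [("1",), ("2",)])

# 1) Add the new column alongside the old one.
conn.execute("ALTER TABLE m ADD COLUMN val_new REAL")

# 3) A trigger applies the "up" transformation to every write that arrives
#    through the old schema version, keeping the new column in sync.
conn.executescript("""
CREATE TRIGGER up_on_insert AFTER INSERT ON m
BEGIN
    UPDATE m SET val_new = CAST(NEW.val AS REAL) WHERE id = NEW.id;
END;
""")

# 2) Backfill pre-existing rows with the same transformation.
conn.execute("UPDATE m SET val_new = CAST(val AS REAL) WHERE val_new IS NULL")

# Writes via the old schema keep both versions populated until "complete".
conn.execute("INSERT INTO m (val) VALUES ('3')")

# 4) On "complete", the old column would be dropped; until then both coexist.
```

Because both columns are live simultaneously, old and new application versions can run side by side for the whole rollout.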
Backfills are executed in batches, you can check how that works here: https://github.com/xataio/pgroll/blob/main/pkg/migrations/ba...
I don't think any of us has tested pgroll against timescaledb, but I would love to hear the results if anyone does!
mst|2 years ago
(I very much appreciate the effort to provide tooling that puts all these things together, btw)