top | item 26890202

tomhoule | 4 years ago

I can clarify one point, relative to migration rollbacks. We indeed chose not to implement down migrations as they are in most other migration tools. Down migrations are useful in two scenarios: in development, when you are iterating on a migration or switching branches, and when deploying, when something goes wrong.

- In development, we think we already have a better solution. Migrate will tell you when there is a discrepancy between your migrations and the actual schema of your dev database, and offer to resolve it for you.

- In production, currently, we will diagnose the problem for you. But indeed, rollbacks are manual: you use `migrate resolve` to mark the migration as rolled back or forward, but the act of rolling back itself is manual. So I would say it _is_ supported, just not as convenient and automated as the rest of the workflows. Down migrations are somewhat rare in real production scenarios, and we are looking into better ways to help users recover from failed migrations.

wdb | 4 years ago

This sounds like you aren't able to easily roll back a release that had migrations.

tomhoule | 4 years ago

No question that there are cases where down migrations are a solution that works. In my experience though, these cases are more limited than you might think. There are a lot of presuppositions that need to hold for a down migration to "just work":

- The migration is reversible in the first place. It did not drop or irreversibly alter anything (table/column) the previous version of the application code was using.

- The up migration ran to completion; it did not fail at some step in the middle.

- The down migration actually works. Are your down migrations tested?

- You have a small enough data set that the rollback will not take hours or bring down your application by locking tables.
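The third presupposition (are your down migrations tested?) is the easiest one to check mechanically. A minimal sketch of one way to do it, using SQLite as a scratch database and invented migration SQL: apply the up migration, then the down migration, and compare the resulting schema to the starting schema.

```python
import sqlite3

# Illustrative migration pair, not from any real project.
UP = "CREATE TABLE audit_log (id INTEGER PRIMARY KEY, entry TEXT);"
DOWN = "DROP TABLE audit_log;"

def schema(con):
    # Normalised list of (table, column) pairs for comparison.
    cols = []
    for (table,) in con.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"):
        for row in con.execute(f"PRAGMA table_info({table})"):
            cols.append((table, row[1]))
    return sorted(cols)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

before = schema(con)
con.executescript(UP)
assert schema(con) != before   # the up migration changed something
con.executescript(DOWN)
assert schema(con) == before   # the down migration restored the schema
```

A real test would run against the same database engine as production, and would also need seed data to catch down migrations that restore the schema but destroy rows.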

There are two major avenues for migration tools to be more helpful when a deployment fails:

- Give a short path to recovery that you can take without messing things up even more in a panic scenario

- Guide you towards patterns that can make deploying and recovering from a bad deployment painless, e.g. forward-only thinking, the expand-and-contract pattern, etc.
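The expand-and-contract pattern mentioned in the second point can be sketched concretely. Here a column rename (`full_name` to `display_name`, both invented) is split into three forward-only phases; in production each phase would be a separate deploy, but SQLite lets us show the shape of the data flow in one script:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, full_name TEXT)")
con.execute("INSERT INTO users (full_name) VALUES ('Ada'), ('Grace')")

# Phase 1 (expand): add the new column; old and new code can coexist.
con.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Phase 2 (migrate): backfill while application code dual-writes.
con.execute("UPDATE users SET display_name = full_name "
            "WHERE display_name IS NULL")

# Phase 3 (contract): once nothing reads full_name any more, drop it.
# (Older SQLite has no DROP COLUMN, so rebuild the table instead.)
con.executescript("""
    CREATE TABLE users_new (id INTEGER PRIMARY KEY, display_name TEXT);
    INSERT INTO users_new SELECT id, display_name FROM users;
    DROP TABLE users;
    ALTER TABLE users_new RENAME TO users;
""")

rows = con.execute("SELECT display_name FROM users ORDER BY id").fetchall()
```

No step ever needs reversing: if a deploy fails between phases, the previous application version still works, which is the whole point of the pattern.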

We're looking into how we can best help in these areas. It could very well mean we'll have down migrations (we're hearing users who want them, and we definitely want these concerns addressed).

abaldwin99 | 4 years ago

So future features notwithstanding, is the typical Prisma workflow that if a migration failed during a production deploy the developer would have to manually work out how to fix it while the application is down?

tomhoule | 4 years ago

As of now the tool is not strongly opinionated here: you could absolutely maintain a set of down migrations, next to your regular migrations, to recover from bad deployments, and apply the relevant one (manually, admittedly) in case of problems.
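That "down migrations on the side" idea can be sketched in a few lines. The file layout (a `<name>.down.sql` next to each regular migration) and all names here are invented, with SQLite standing in for the real database:

```python
import pathlib
import sqlite3
import tempfile

# Set up an example migrations directory.
root = pathlib.Path(tempfile.mkdtemp())
(root / "20210101_add_flags.sql").write_text(
    "CREATE TABLE flags (id INTEGER PRIMARY KEY, name TEXT);")
(root / "20210101_add_flags.down.sql").write_text(
    "DROP TABLE flags;")

def roll_back(con, migrations_dir, name):
    """Apply the hand-maintained down script for the given migration."""
    down = migrations_dir / f"{name}.down.sql"
    con.executescript(down.read_text())

con = sqlite3.connect(":memory:")
# Deploy applies the regular migration...
con.executescript((root / "20210101_add_flags.sql").read_text())
# ...and if the deploy goes bad, the operator applies the down script.
roll_back(con, root, "20210101_add_flags")

tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
```

The caveats from the earlier comment still apply: this only helps if the down script is correct, tested, and the up migration actually ran to completion.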

tor291674 | 4 years ago

Take a backup before you do any prod migrations.
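One concrete way to follow that advice, sketched with SQLite's backup API (a real database would use its own dump or snapshot tooling): take a consistent copy before running the migration, and restore from it if the migration dies part-way.

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES ('Ada')")
src.commit()

backup = sqlite3.connect(":memory:")   # in production: a file on disk
src.backup(backup)                     # consistent point-in-time copy

try:
    # Deliberately broken migration script: the second statement fails,
    # possibly leaving the source database in a partial state.
    src.executescript("ALTER TABLE users RENAME TO accounts; BOGUS SQL;")
except sqlite3.Error:
    # Recover from the snapshot, which still has the original schema.
    restored = backup.execute("SELECT name FROM users").fetchone()[0]
```

This is exactly the failure mode from the thread: an up migration that stops in the middle, where neither the up nor a down script cleanly applies, but a backup does.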

atleta | 4 years ago

The first thing I looked at on your site is how migrations work. Because honestly, I think that's one of the best things about Django. They just got it right, and as you say, not many other tools come close.

I wonder if you have looked at how it works. They have put in something like a decade to make it work, and it's very powerful and a joy to use.

Down migrations are indeed very useful and important once you get used to them. First and foremost, they give you very strong confidence in changing your schema. The last time I told someone I was helping with Django to "always write the reverse migration" was yesterday.

There is no way you can automatically resolve the discrepancies you can get with branched development, partially because migrations can migrate the data, not just update the schema. It's pretty simple as long as we're only talking about adding a few tables or renaming columns: you just hammer the schema into whatever format the migrations on that branch expect. But even that can go wrong: what if I introduced a NOT NULL constraint on a column in one of the branches and I want to switch over? Say my migration set a default value to deal with it. Hammering won't help here.
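The NOT NULL example can be made concrete. In this SQLite sketch (schema invented), the forward migration backfills NULLs with a default before the constraint is enforced; after that, rows that were NULL and rows that really held the default value are indistinguishable, so no automatic reverse step can restore the original data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, qty INTEGER)")
con.executemany("INSERT INTO items (qty) VALUES (?)",
                [(None,), (0,), (5,)])

# Forward migration on the branch: backfill with the default, then
# (conceptually) add the NOT NULL constraint on qty.
con.execute("UPDATE items SET qty = 0 WHERE qty IS NULL")

# Switching back to the other branch: the constraint can be dropped,
# but the first two rows are now indistinguishable -- the NULL is gone.
zeros = con.execute(
    "SELECT COUNT(*) FROM items WHERE qty = 0").fetchone()[0]
nulls = con.execute(
    "SELECT COUNT(*) FROM items WHERE qty IS NULL").fetchone()[0]
```

This is why "hammering" the schema back into shape is not the same as reversing the migration: the schema can be restored, the lost information cannot.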

The thing is that doing the way Django does it is not that hard (assuming you want to write a migration engine anyway). Maybe you've already looked at it, but just for the record:

- They don't use SQL for the migration files, but Python (it would be TypeScript in your case). This is what they generate.

- The Python files contain the schema change operations encoded as Python objects (e.g. `RenameField` when a field gets renamed and thus the column has to be renamed too, etc.).

- They generate the SQL to apply from these objects.

Now, since the migration files themselves are built of Python objects representing the needed changes, it's easy for them to have both a forward and a backward migration for each operation. You could say that this doesn't allow for customization, but they have two special operations. One is for running arbitrary SQL, called `RunSQL` (it takes two parameters: one SQL string for the forward migration and one for the backward migration), and one is for arbitrary Python code, called `RunPython` (it takes two functions as arguments: one for the forward and one for the backward migration).

One would usually use `RunSQL` for the tricky things the migration tool can't do (e.g. adding database constraints not supported by the ORM) and `RunPython` for data migrations (when you actually need to move data around due to a schema change). And thanks to the above architecture, you can actually use the ORM in the migration files to do these data migrations. Of course, you can't just import your models from your code, because they will already have evolved by the time you replay older migrations (e.g. to set up a new database or to run tests). But because the changes are encoded as Python objects, they can be replayed in memory, and the model state that was valid at the time the migration was written can be reconstructed.
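A stripped-down sketch of this architecture: migrations are objects that know both directions, so the engine can run the same history forward or backward. These classes mimic Django's `RunSQL`/`RunPython` in spirit only; none of this is Django's actual code, and SQLite stands in for the real database:

```python
import sqlite3

class RunSQL:
    def __init__(self, forward_sql, backward_sql):
        self.forward_sql, self.backward_sql = forward_sql, backward_sql
    def forwards(self, con):
        con.executescript(self.forward_sql)
    def backwards(self, con):
        con.executescript(self.backward_sql)

class RunPython:
    def __init__(self, forward_fn, backward_fn):
        self.forward_fn, self.backward_fn = forward_fn, backward_fn
    def forwards(self, con):
        self.forward_fn(con)
    def backwards(self, con):
        self.backward_fn(con)

operations = [
    RunSQL("CREATE TABLE tags (id INTEGER PRIMARY KEY, label TEXT);",
           "DROP TABLE tags;"),
    RunPython(
        lambda con: con.execute("INSERT INTO tags (label) VALUES ('seed')"),
        lambda con: con.execute("DELETE FROM tags WHERE label = 'seed'")),
]

con = sqlite3.connect(":memory:")
for op in operations:               # migrate forward
    op.forwards(con)
for op in reversed(operations):     # roll back, in reverse order
    op.backwards(con)

tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
```

The key design point is that reversibility falls out of the data model: because each operation carries both directions, the engine never has to derive a down migration from SQL after the fact.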

And when you create a new migration after changing your model, you are actually comparing your model to the result of this in-memory replay, not to the database, which is great for a number of reasons.
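The in-memory replay idea can be sketched as well. The state model here (a dict mapping table names to column lists) is a deliberate oversimplification of what a real engine tracks, and all names are invented:

```python
def replay(history):
    """Reconstruct the schema state implied by the recorded operations."""
    state = {}
    for op, *args in history:
        if op == "create_table":
            table, columns = args
            state[table] = list(columns)
        elif op == "add_column":
            table, column = args
            state[table].append(column)
    return state

history = [
    ("create_table", "users", ["id", "name"]),
    ("add_column", "users", "email"),
]

# The model as it looks in code right now: one column the replayed
# history doesn't know about yet.
model = {"users": ["id", "name", "email", "nickname"]}

state = replay(history)
missing = [(t, c) for t, cols in model.items()
           for c in cols if c not in state.get(t, [])]
# `missing` is what the autodetector would turn into the next migration.
```

Diffing against the replayed state rather than the live database is what makes the autodetector independent of whichever database happens to be connected.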

tomhoule | 4 years ago

Yep, we looked at the Django ORM as an inspiration. I unfortunately don't have the bandwidth right now to write a lengthy, thoughtful response, but quickly, on a few points:

- The replaying of the migration history is exactly what we do, but not in memory: rather, directly on a temporary (shadow) database. It's a tradeoff, but it lets us be a lot more accurate, since we know exactly what the migrations will do, rather than guessing from an abstract representation outside of the db.

- I wrote a message on the why of no down migrations above. It's temporary — we want something at least as good, which may be just (optional) down migrations.

- The discrepancy resolution in our case is mainly about detecting that schemas don't match, and how, rather than actually migrating them (we recommend hard resets plus seeding in a lot of cases in development), so data is not as much of an issue.
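The shadow-database replay described in the first point can be sketched as follows, with an in-memory SQLite database standing in for the throwaway shadow database and invented migration SQL: replay the history, then introspect the actual resulting schema instead of predicting it.

```python
import sqlite3

migration_history = [
    "CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);",
    "ALTER TABLE posts ADD COLUMN body TEXT;",
]

shadow = sqlite3.connect(":memory:")   # temporary shadow database
for sql in migration_history:
    shadow.executescript(sql)

# Introspect what the SQL actually did, rather than guessing from an
# abstract representation of the operations.
columns = [row[1] for row in shadow.execute("PRAGMA table_info(posts)")]
```

Because the schema is read back from a real database engine, hand-edited SQL, engine quirks, and defaults all show up exactly as they would in production.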