Some small teams (including some I work with) find it clearer to use 001, 002, etc., and simply adopt the convention of getting in touch before any schema change, which is often a good idea anyway.
Some frameworks take care of this automatically. Django has an excellent library called South that does exactly this. It keeps track of migrations and can automatically detect nearly all of them. It's much easier than doing it all by hand.
Checking the schema and upgrade scripts into source control is extremely important, and this is how I have managed it in the past. (names like table.alter.12.sql). I regret not setting this type of versioning table up from the beginning, though - when you have multiple environments (dev, QA, staging, etc) it can get very confusing in a hurry as to which schema changes have been applied to which databases. With a versioning table and maybe even some deployment time scripts to sanity check versions between the release and the DB, mismatches become immediately apparent.
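The versioning-table-plus-sanity-check idea can be sketched in a few lines. This is a minimal illustration using SQLite (the table and function names here are my own, not from any particular tool): one row records the schema version, and a deploy-time check compares it against the version the release expects.

```python
import sqlite3

# Hypothetical one-row versioning table plus a deploy-time sanity check.
EXPECTED_SCHEMA_VERSION = 12  # the version this release was built against

def check_schema_version(conn, expected):
    """Compare the DB's recorded schema version with what the release expects."""
    (actual,) = conn.execute("SELECT version FROM schema_version").fetchone()
    if actual != expected:
        raise RuntimeError(
            f"schema mismatch: DB is at version {actual}, release expects {expected}")
    return actual

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schema_version (version INTEGER NOT NULL)")
conn.execute("INSERT INTO schema_version VALUES (?)", (EXPECTED_SCHEMA_VERSION,))
print(check_schema_version(conn, EXPECTED_SCHEMA_VERSION))  # 12
```

Run at deployment time, this makes a mismatch between a release and its database immediately apparent instead of surfacing as a confusing runtime error later.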
The team behind the iBATIS OSS project (now known as MyBatis) has a good tool named "migrations" which does exactly this. The original OSS project is a SQL mapping library for .NET and Java, but the tool itself is independent of platform, language, and OS.
Each change is stored as a script named "<timestamp>_<brief description>.sql", and the timestamps also serve as IDs in the "changelog" table in the DB. Creating the changelog table is always the first change.
The tool's "migrate" command then has options for "status", "up", "down", etc., and features to create scripts to go from v1 -> v2.
Another nice concept they add to the mix is an "Undo" section in each script, so that you can roll back changes and downgrade schemas just as easily as you upgrade them.
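The core of such a tool fits in a short sketch. This is an illustration of the scheme described above, not the real MyBatis migrations code; the file name, undo marker, and table layout are my own assumptions. Each script's timestamp prefix doubles as its ID in the changelog table, and everything after the undo marker is the rollback half.

```python
import sqlite3

# Illustrative migration scripts, keyed by filename (timestamp prefix = ID).
# The text after the "-- //@UNDO" marker is the downgrade half (not run here).
SCRIPTS = {
    "20101209120000_create_users.sql":
        "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);\n"
        "-- //@UNDO\n"
        "DROP TABLE users;",
}

def migrate_up(conn):
    """Apply, in timestamp order, every script not yet recorded in changelog."""
    conn.execute("CREATE TABLE IF NOT EXISTS changelog (id TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT id FROM changelog")}
    for name in sorted(SCRIPTS):              # timestamp prefix gives the order
        script_id = name.split("_", 1)[0]
        if script_id not in applied:
            up_sql = SCRIPTS[name].split("-- //@UNDO")[0]  # "up" half only
            conn.executescript(up_sql)
            conn.execute("INSERT INTO changelog VALUES (?)", (script_id,))

conn = sqlite3.connect(":memory:")
migrate_up(conn)
migrate_up(conn)   # safe to re-run: already-applied IDs are skipped
```

A "down" command would do the mirror image: run the text after the marker for the newest recorded ID, then delete that changelog row.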
Yes, this is what we usually did at a previous job. The only benefit of storing it in the database is that the code can check the schema version if that's a necessary component. I could see some value there.
Nowadays I'm primarily working with non-relational datastores, where this seems to be much less of an issue. There we version the entities that we store instead of a schema, and that is done entirely in source control as well.
There's no particular reason to keep a whole table for this, since all you really want to know is what version you're at.
We use a UDF called VersionNumber that returns an integer. When the continuous build processes a new database change script, that function gets modified to increment the value it returns. So you can always call VersionNumber() on any of our databases and know which version you're looking at. The builds actually use it to decide which scripts they need to run to get from where they are to where they need to be.
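A rough sketch of the same idea, with illustrative names: SQLite has no stored UDFs, so the function is registered on the connection here, whereas on a server DBMS this would be a stored function whose returned literal the build rewrites with each change script.

```python
import sqlite3

SCHEMA_VERSION = 7  # the continuous build bumps this constant per change script

conn = sqlite3.connect(":memory:")
# Register a zero-argument function the database can answer with its version.
conn.create_function("VersionNumber", 0, lambda: SCHEMA_VERSION)

(version,) = conn.execute("SELECT VersionNumber()").fetchone()
print(version)  # 7
```

The build then compares `version` against its ordered list of change scripts to decide which ones still need to run to bring the database up to date.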
You already have all the commentary you need in source control and in the change scripts themselves. I don't see a reason to duplicate it in the database as well.
I'd argue that "all projects with a [relational] database [and an ORM]" should rather move to modern tooling instead of writing migrations by hand.
Rails is surprisingly anachronistic here, which probably stems from that awful conceptual separation between model and schema.
Most other platforms have semi-automatic schema evolution that usually works very well (DataMapper automigrate, Django South, Hibernate SchemaUpdate, etc.).
If you don't have enormous amounts of data, one thing I've found helpful on my own projects is to make a hot backup of the database every time the schema changes, as well as daily, and then pair each checkout of the code with the latest hot backup.
(This is far easier when using SQLite, which has its own tradeoffs.)
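For the SQLite case, a hot backup is a few lines with the standard library. This is a hypothetical sketch of the pairing idea (the naming scheme is my own): `Connection.backup()` takes a consistent snapshot even while the database is in use, and the timestamped filename can be recorded alongside the code revision it pairs with.

```python
import pathlib
import sqlite3
import time

def hot_backup(db_path, backup_dir):
    """Take a consistent snapshot of a live SQLite DB into backup_dir."""
    dest = pathlib.Path(backup_dir) / f"backup-{time.strftime('%Y%m%d%H%M%S')}.sqlite"
    with sqlite3.connect(db_path) as src, sqlite3.connect(dest) as dst:
        src.backup(dst)   # page-by-page copy; writers need not stop
    return dest
```

Checking out an older revision then means restoring the newest backup taken at or before that revision's date.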
Yes yes yes. In my experience on enterprise dev teams, it's far too common for people not to version-control their DB schema at all, or to rely on tools that diff the dev and prod schemas to produce upgrade scripts, all driven by creating tables in a GUI and then exporting the DDL with a tool (e.g. TOAD if you're doing Oracle work).
A few years back, I built an installer in Ruby to apply "PL/SQL modules" to an Oracle database. This was a massive project with > 100 developers at that point, probably heading toward 1M lines of code.
My migrations table worked on modules, so there were a set of migrations per application area, but it was really just an extension of this idea (and I borrowed the idea from Rails too)!
Although the project is still pretty young, Liquibase (http://www.liquibase.org/) is a solid open-source project for not only managing schema revisions, but also inserting seed data and abstracting schema structure from the DBMS.
Nathan also does a good job of applying submitted patches quickly.
I have small PHP/MySQL projects that I develop across multiple home computers and the server, and this is a problem that sometimes hampers me. Files are source-controlled through SVN. Any suggestions for a tool lightweight enough to be worth my time?
In this case, the key corresponds to the filename containing the migration statements.
Further, consider if your company had three deployments of your product. One for bleeding edge testing, one for staging/QA, and live. You'd want the schema version to be consistent across all three schemas.
troels | 15 years ago
moe | 15 years ago
You may want to re-think that statement.
(and I'm not trolling, I use and like rails, but the database abstraction is amongst the weakest of all popular frameworks)
StavrosK | 15 years ago
thibaut_barrere | 15 years ago
neeleshs | 15 years ago
rmc | 15 years ago
rmc | 15 years ago
Danny you autocorrect on my Android
dbrannan | 15 years ago
I know the developer - she is awesome!
ewams | 15 years ago
StavrosK | 15 years ago
jules | 15 years ago
citricsquid | 15 years ago
tbrownaw | 15 years ago
checoivan | 15 years ago
And resist the temptation to hack the schema directly on your servers; it ends up being a larger effort :)
z2amiller | 15 years ago
brown9-2 | 15 years ago
zaphar | 15 years ago
jasonkester | 15 years ago
moe | 15 years ago
lsb | 15 years ago
StavrosK | 15 years ago
GFischer | 15 years ago
http://news.ycombinator.com/item?id=1984609
as I don't have to Google around to find out what South is (I haven't used Django yet)
fendale | 15 years ago
smarterchild | 15 years ago
rezaman | 15 years ago
quicksilver03 | 15 years ago
DbDeploy: http://dbdeploy.com/
DbMaintain: http://www.dbmaintain.org/
shashivelur | 15 years ago
mise | 15 years ago
chopsueyar | 15 years ago
Here is an older article... http://www.davedevelopment.co.uk/2008/04/14/how-to-simple-da...
stepancheg | 15 years ago
terra_t | 15 years ago
BrandonM | 15 years ago
brandon | 15 years ago
jfb | 15 years ago