I built this last summer and thought it was fairly complete. It turns out I was naive: six months later, I have resolved ~100 issues reported by users.
Incredible tool, definitely going to get a working PoC for this with my team. Curious how this can apply across many databases, say in a cluster, checking for discrepancies, etc. Wondering if that is a supported feature?
Tried it out just now with SQLite to see what it does with difficult migrations, like adding a foreign key constraint to an existing table. It looks like it just generates invalid migrations, e.g.:
I use https://github.com/ariga/atlas for this. Migration-based and schema-based approaches both have their good and bad sides, and I like using both, even in one project: schema-based can make development faster and easier, while migration-based feels more reliable.
Ariel from Atlas here. This setup is pretty common (declarative locally, versioned in real environments). Since migrations are auto-generated (by Atlas) on PRs, most developers never touch the versioned workflow directly.
Yeah, after looking more into sqldef and alternatives I stumbled on Atlas too, and I like its explicit support for a migration-based flow for exactly the same reasons. I want to know exactly what kind of migration will be applied to my prod database beforehand.
For example: add a temporarily-nullable column to a large table, deploy new code which starts writing to the new column, backfill that column for existing rows in batches in the background, and finally alter the column to be non-nullable.
Another example of a non-trivial schema management case is making a schema change after a new version's rollout completes: a simple migration at container startup can't do that.
It must be a solved problem, but I haven't seen a good tool for it which would allow expressing these imperative changes in a declarative way that can be committed, reviewed, and tested along with the app code. It is always a bunch of ad-hoc ugly scripts on the side and some hand-wavy deployment instructions.
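The expand/backfill pattern described above can be sketched roughly as follows. This is a minimal illustration using sqlite3 (so it runs anywhere), with made-up table and column names; it is not the output of any particular tool:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("a",), ("b",), ("c",)])

# Step 1: add the new column as nullable, so the ALTER itself is cheap.
conn.execute("ALTER TABLE users ADD COLUMN name_lower TEXT")

# Step 2: backfill existing rows in small batches to keep locks short.
BATCH = 2
while True:
    rows = conn.execute(
        "SELECT id, name FROM users WHERE name_lower IS NULL LIMIT ?",
        (BATCH,),
    ).fetchall()
    if not rows:
        break
    conn.executemany(
        "UPDATE users SET name_lower = ? WHERE id = ?",
        [(name.lower(), rid) for rid, name in rows],
    )
    conn.commit()

# Step 3, tightening the column to NOT NULL, would be
# ALTER COLUMN ... SET NOT NULL in Postgres (and a table rebuild in SQLite).
filled = conn.execute(
    "SELECT COUNT(*) FROM users WHERE name_lower IS NOT NULL"
).fetchone()[0]
```

The hard part the comment points at is that steps 1 and 3 are schema changes while step 2 is an application-coordinated data job, which is why a single startup migration can't express the whole sequence.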
I tend to prefer to hand-roll schema migrations... but I use grate[1] for the most part. That said, I've created similar tooling for different scenarios.
Also means I can stop with my hobby project that was supposed to do the same. It wasn't far along, and I haven't worked on it in months anyway.
So I'll spend my time on another hobby project instead, one that also solves something that's already been solved 100 times over, but where I don't like the other solutions (simple log monitoring, including systemd, that sends error emails if something is found).
Pleasantly surprised this post was about a neat little tool and not another migration manager. This always seemed like a weak spot in regular SQL. Wish it were declarative like Spanner DDL.
When using Postgres, I just try to keep my schema script idempotent and compatible with any version of the DB. I start with a CREATE TABLE IF NOT EXISTS; then, if I add a new column, it goes in there and also into a separate ALTER. At some point it gets cluttered and I delete the ALTERs once things are stable. If something hits the fan and I have to restore an old backup from before the last purge, this tool helps me make it compatible quickly.
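A minimal sketch of that guarded-ALTER approach, using sqlite3 instead of Postgres (table and column names invented for illustration):

```python
import sqlite3

def column_names(conn, table):
    # PRAGMA table_info returns one row per column; index 1 is the name.
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

def apply_schema(conn):
    # Safe to run against any version of the database.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS books (id INTEGER PRIMARY KEY, title TEXT)"
    )
    # The "separate ALTER", guarded so reruns are no-ops.
    if "isbn" not in column_names(conn, "books"):
        conn.execute("ALTER TABLE books ADD COLUMN isbn TEXT")

conn = sqlite3.connect(":memory:")
apply_schema(conn)
apply_schema(conn)  # idempotent: running it twice changes nothing
```

In Postgres the guard can instead be `ADD COLUMN IF NOT EXISTS`, which is what keeps the single script reusable across database versions.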
How does this compare to Entity Framework (for dotnet) migrations, or to sqitch/Liquibase?
I get the declarative nature of this, but schema migration is not purely declarative, particularly on large production databases. An ideal schema manager would understand the costs of particular migrations (perhaps by using table stats and EXPLAIN) and feed that back into the migration strategy so that downtime is minimized or eliminated.
Adding or removing columns or indexes can trigger major table scans and other problems, especially when partitioning conditions change.
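As a rough illustration of the idea above, a tool could pick a migration strategy from an estimated table size. Everything here is invented (the threshold, the names, the COUNT(*) stand-in); a real implementation would consult planner statistics such as pg_class.reltuples or EXPLAIN output rather than scanning the table:

```python
import sqlite3

ONLINE_THRESHOLD = 1_000_000  # assumed cutoff, purely illustrative

def choose_strategy(conn, table):
    # Small tables: a direct ALTER is fine. Large ones: prefer an
    # online/batched approach to avoid long locks and full rewrites.
    (rows,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    return "direct-alter" if rows < ONLINE_THRESHOLD else "online-batched"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY)")
strategy = choose_strategy(conn, "events")
```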
The bigger problem is that data can be part of a migration. A diff is far too rudimentary.
If I split a Fullname into FirstName and LastName, a diff will only tell half of the story. In EF Core, you will adjust an Up and a Down generated method to make the change reversible, plus you deal with data transformation there.
So I would love to know how people handle that without an explicit notion of migrations.
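A sketch of what the hand-written "up" for that FullName split might look like (sqlite3, with illustrative names). The loop in the middle is exactly the part a pure schema diff cannot generate, because it encodes business logic:

```python
import sqlite3

def up(conn):
    conn.execute("ALTER TABLE people ADD COLUMN first_name TEXT")
    conn.execute("ALTER TABLE people ADD COLUMN last_name TEXT")
    # The data transformation: split on the first space.
    for pid, full in conn.execute("SELECT id, full_name FROM people").fetchall():
        first, _, last = full.partition(" ")
        conn.execute(
            "UPDATE people SET first_name = ?, last_name = ? WHERE id = ?",
            (first, last, pid),
        )
    # A matching "down" would concatenate the parts back into full_name,
    # which is lossy if the split wasn't clean -- hence "half the story".

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, full_name TEXT)")
conn.execute("INSERT INTO people (full_name) VALUES ('Ada Lovelace')")
up(conn)
row = conn.execute("SELECT first_name, last_name FROM people").fetchone()
```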
We've rolled our own variant of this, which uses an XML file as input. Just easier to parse than SQL schema definitions. The tool compares the schema defined by the XML with the DB, and applies changes as needed.
We used Sybase SQLAnywhere, and a complication there was that if you had materialized views against a table, you had to drop and recreate that view when adding or removing columns in the table. And when you recreate it, you of course have to remember to recreate the indexes on that materialized view.
Tracking this in case you have multiple materialized views touching multiple tables became a bit tricky, and you don't want to do the "dumb" thing to just always drop and recreate, or per-column, since some of them might take an hour or so to recreate and reindex.
The tool has some built-in safeties, like never dropping a column just because it's missing from the XML (you have to add an explicit element for that), and only performing safe column definition changes: integer to varchar(50) is safe, integer to varchar(3) is not, etc.
It really made database migrations very painless for us, which is great since we have hundreds of on-premise installations. Just modify the XML and let the tool do its job.
And since it's our tool, we can add functionality when we need to.
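That "safe column change" rule could look something like this toy check. The type names and rules are deliberately simplified for illustration and are not the actual tool's logic:

```python
import re

def is_safe_change(old: str, new: str) -> bool:
    """Allow only widening conversions, reject narrowing ones."""
    new_vc = re.fullmatch(r"varchar\((\d+)\)", new)
    old_vc = re.fullmatch(r"varchar\((\d+)\)", old)
    # integer -> varchar(n): safe only if n can hold any 32-bit integer,
    # i.e. at least len("-2147483648") == 11 characters.
    if old == "integer" and new_vc:
        return int(new_vc.group(1)) >= 11
    # varchar(a) -> varchar(b): safe when b >= a (pure widening).
    if old_vc and new_vc:
        return int(new_vc.group(1)) >= int(old_vc.group(1))
    # Anything else: only a no-op change is safe.
    return old == new
```

The point is that the tool refuses by default and makes lossy changes opt-in, rather than silently generating a destructive ALTER.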
Looks like this is only useful for empty databases. Which severely limits possible use cases.
Schema management is only a small part of the problem, and I don’t think this tool handles data migrations. E.g. if I reshape a JSONB column into something more structured, I don’t think it would be able to handle that. Or if I drop a column, the backwards migration it generates is ADD COLUMN … NOT NULL, which is obviously unusable if the table already has any data in it.
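A quick sqlite3 demonstration of why such a generated down-migration fails, and the DEFAULT that would make it valid (table and column names are invented; SQLite rejects this form outright, while Postgres would fail once the table has rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO t DEFAULT VALUES")

try:
    # The naive generated "down": NOT NULL with no default.
    conn.execute("ALTER TABLE t ADD COLUMN flag INTEGER NOT NULL")
    failed = False
except sqlite3.OperationalError:
    # SQLite: "Cannot add a NOT NULL column with default value NULL"
    failed = True

# Supplying a DEFAULT gives existing rows a value, so this succeeds:
conn.execute("ALTER TABLE t ADD COLUMN flag INTEGER NOT NULL DEFAULT 0")
```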
Sadly, it doesn't handle dropping columns well in SQLite. Actually, it was news to me that DROP COLUMN was added to SQLite somewhat recently, but the version you'll get on most devices certainly doesn't include it.
But anyway, from the example for sqlite, I added "x integer not null," to the original table, and was greeted with "-- Skipped: ALTER TABLE books DROP COLUMN x". Then ticked the "Enable DROP" and got the same, except with the line uncommented.
Such a shame, as this is one thing that would make a lot of difference. FWIW, the standard way of doing this is to create a new table with the desired schema, copy the data across, drop the original table, and rename the new table into place. It's a shame that it doesn't automate that, since this is one of those things that's incredibly easy to mess up if you're not careful, especially if there are constraints involved.
If it just does the easy stuff, you might as well just do it by hand.
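The rebuild recipe described above, sketched with sqlite3. It follows the procedure from the SQLite ALTER TABLE documentation: create a table with the new schema, copy the data, drop the old table, rename the new one into place. A real migration must also recreate indexes, triggers, and views, and should run inside a transaction with foreign keys disabled; all of that is omitted here, and the `books`/`x` names mirror the example above:

```python
import sqlite3

def drop_column_by_rebuild(conn):
    cur = conn.cursor()
    # New schema without the dropped column "x".
    cur.execute("CREATE TABLE books_new (id INTEGER PRIMARY KEY, title TEXT)")
    cur.execute("INSERT INTO books_new (id, title) SELECT id, title FROM books")
    cur.execute("DROP TABLE books")
    cur.execute("ALTER TABLE books_new RENAME TO books")
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, x INTEGER)")
conn.execute("INSERT INTO books (title, x) VALUES ('Dune', 1)")
drop_column_by_rebuild(conn)
cols = [row[1] for row in conn.execute("PRAGMA table_info(books)")]
```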
MS' SQL Server Data Tools is such an abominable garbage fire that I have no interest in these kinds of tools. Besides being a buggy mess, it's very often insufficient: you end up having to maintain migrations anyway, because you often have to inject data into new tables or columns, rename columns, etc.
> often you have to inject data into new tables or columns
No tool can help you with that, simply because this kind of data migration depends on your particular business logic that the tool has no way of knowing about.
While SQL Server Data Tools has its warts, it has been immensely useful for us in making sure every little detail gets handled during migration. That doesn't usually mean that it can do the entire migration itself - we do the manual adjustments to the base tables that SSDT cannot do on its own, and then let it handle the rest, which in our case is mostly about indexes, views, functions and stored procedures.
After all that, SSDT can compare the resulting database with the "desired" database, and reliably flag any differences, preventing schema drift.
I'm always in the market for new SQL tooling, but I'm wondering: what's the use case here?
Isn't it much quicker to write a one-line migration vs copying the DDL, adapting it to the desired state, and then getting the migrations from that tool? Or am I misunderstanding something?
Be sure to look at the actual sqldef command-line tool, not the trivial copy-and-paste demo on their website. Declarative schema management is best used combined with a Git repo.
In the big picture, declarative schema management has lots of advantages around avoiding/solving schema drift, either between environments (staging vs prod) or between shards in a sharded setup (among thousands of shards, one had a master failure at an inopportune time).
It's also much more readable to have the "end state" in your repo at all times, rather than a sequence of ALTERs.
Looks pretty cool. Lately I've been using ChatGPT to generate SQLite schema migrations and it works shockingly well. I give it the original schema and new schema and it generates the statements including caveats and gotchas to watch out for.
is there anything (open source) similar to microsoft database project, but that would work for postgresql?
i like the following about it:
1. database schema is regular code
2. schema changes are made declaratively
3. packaging (.dacpac) and a deployment script
most open source tools seem to be after-the-fact tools that do diffs; ms db project handles the code from the start in a declarative, source-controlled way
This is great! It would be a cool feature to integrate with SQG; there you currently need to provide the sequence of migration steps to get the full schema.
How is this better than Liquibase? Isn’t there a risk of missing schema evolution steps when you only diff current vs target? (Because current can be v1 and target v3 and you might have lost column migration logic from v2)
Personally I've always called this style "declarative schema management" since the input declares the desired state, and the tool figures out how to transition the database to that state.
sqldef is really cool for supporting many database dialects. I'm the author of Skeema [1] which includes a lot of functionality that sqldef lacks, but at the cost of being 100% MySQL/MariaDB-specific. Some other DB-specific options in this space include Stripe's pg-schema-diff [2], results [3], stb-tester's migrator for sqlite [4], among many others over the years.
The more comprehensive solutions from ByteBase, Atlas, Liquibase, etc tend to support multiple databases and multiple paradigms.
And then over in Typescript ORM world, the migrators in Prisma and Drizzle support a "db push" declarative concept. (fwiw, I originated that paradigm; Prisma directly copied several aspects of `skeema push`, and then Drizzle copied Prisma. But ironically, if I ever complete my early-stage next-gen tool, it uses a different deployment paradigm.)
Anyone with real-world usage of this one vs Atlas? I've used Atlas, but it seems to be moving away from FOSS (paywalled features). It also struggled with some migrations that were a bit more than trivial (like trying to apply changes in the wrong order).
tianzhou|24 days ago
quantike|24 days ago
Nice work, really happy to have found this today.
ForHackernews|24 days ago
cjonas|24 days ago
canadiantim|24 days ago
lun3x|24 days ago
`ALTER TABLE books ADD CONSTRAINT fk_books_author FOREIGN KEY (author_id) REFERENCES authors (id)`
Which is not valid in SQLite (https://www.sqlite.org/lang_altertable.html)
Akronymus|24 days ago
wener|24 days ago
a8m|24 days ago
See: https://atlasgo.io/concepts/declarative-vs-versioned#combini..., and https://github.com/ariga/atlas-action?tab=readme-ov-file#ari...
Palmik|23 days ago
nopurpose|24 days ago
tracker1|24 days ago
1. https://grate-devs.github.io/grate/
Pretty easy to set up/use in a dev environment as well... see docker-compose.yaml and the run/dbup script.
https://github.com/tracker1/FastEndpoints-SqlJobQueues
unknown|24 days ago
[deleted]
panzi|24 days ago
morshu9001|24 days ago
rswail|24 days ago
exceptione|24 days ago
magicalhippo|24 days ago
drdaeman|24 days ago
ralferoo|24 days ago
waynenilsen|24 days ago
https://david.rothlis.net/declarative-schema-migration-for-s...
chmaynard|24 days ago
Pxtl|25 days ago
branko_d|24 days ago
dewey|25 days ago
evanelias|25 days ago
There are a bunch of other advantages; I have an old post about this topic here: https://www.skeema.io/blog/2019/01/18/declarative/
It's also quite essential when maintaining nontrivial stored procedures. Doing that with imperative migrations is a gateway to hell. https://www.skeema.io/blog/2023/10/24/stored-proc-deployment...
nodesocket|25 days ago
systems|24 days ago
anymouse123456|24 days ago
Found shmig and it’s been really fantastic.
https://github.com/mbucc/shmig
uwemaurer|24 days ago
https://github.com/sqg-dev/sqg/
froh42|24 days ago
e.g. CREATE TABLE books2 (LIKE book INCLUDING ALL);
So I assume it is not PostgreSQL DDL it uses, but "something close".
yearolinuxdsktp|24 days ago
davidkwast|25 days ago
evanelias|25 days ago
[1] https://github.com/skeema/skeema/
[2] https://github.com/stripe/pg-schema-diff
[3] https://github.com/djrobstep/results
[4] https://david.rothlis.net/declarative-schema-migration-for-s...
chmaynard|24 days ago
DANmode|21 days ago
edoceo|25 days ago
stemchar|24 days ago
andrewg|24 days ago
You tell it what’s being renamed with a special comment.
canadiantim|24 days ago
gavinray|24 days ago