Show HN: Stellar – Git for PostreSQL and MySQL

325 points | obsession | 11 years ago | github.com

77 comments

[+] robert_tweed|11 years ago|reply
Generally the hardest thing with version control on a database (for an evolving codebase) is separating unrelated changes - such as schema changes vs content updates - and branching and merging those changes in sync with the code dependencies. Another issue is non-destructively replaying development changes into test/production environments.

So for example, you might have a feature branch that includes some schema changes and some value modifications, and a content branch that includes a bunch of inserts into a few content tables that happen to include foreign key references to each other (so you need to maintain referential integrity when replaying those updates/inserts).
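
To make that concrete, a replay of content changes has to preserve insert order across foreign keys, e.g. (with made-up tables):

    -- The parent row has to be replayed before the child row that references it,
    -- or the foreign key constraint fails:
    INSERT INTO categories (id, name) VALUES (42, 'Press releases');
    INSERT INTO articles (id, category_id, title)
    VALUES (7, 42, 'Launch announcement');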

I don't see anything in the description that indicates this tool addresses those problems. For me, those are really the only problems that a DB version control system ought to be focused on. Speed of snapshotting is not all that important in a development environment, as you typically work on a cut-down dataset anyway. A minute or so to take a snapshot a few times a day isn't a huge deal, and taking more frequent snapshots doesn't seem to add any value if none of the other problems are addressed.

[+] davesque|11 years ago|reply
"Another issue is non-destructively replaying development changes into test/production environments."

"...you need to maintain referential integrity when replaying those updates/inserts"

I think most of your misgivings about this tool are grounded in the expectation that it would provide merge features like Git. Since it claims to be "git for databases", I suppose that's a fair expectation.

However, the tool does provide the core functionality of Git, which is to implement a tracking system for hash-tagged objects (or, in this case, tables). This does have value. Not only is it faster than standard db dumps, it's also more space-efficient. For some people, this could be really valuable.

I think it's an interesting tool and could serve as a launching point for more powerful/useful functionality.

[+] abhishivsaxena|11 years ago|reply
I would disagree that referential integrity and schemas are the only issues a DB version control system should focus on.

For example, I would be very interested in having git-like branching on top of something like this - http://sandbox.substance.io/docs/lorem_ipsum.json. Basically a version-controlled, schemaless object tree.

I recently implemented something like this, backed by MongoDB and exposing an HTTP API which mimicked git. I had to relax those two requirements (referential integrity and schemas), but it is still worth it.

However, I would be very interested in using libgit2 with a database backend instead of the filesystem.

EDIT: not affiliated with substance in any way.

[+] akurilin|11 years ago|reply
I'd love a tool that solves that problem. Schema / data / user / permission migrations, support for both production and development environments, and integration with CM tools like Ansible would be fantastic.
[+] mbrock|11 years ago|reply
I wish projects like these would always include some basic info in their README about: (1) how it works, and (2) how it might fail.
[+] danudey|11 years ago|reply
Seconded. The extent of what I know about this project is:

1. It's like git somehow?

2. I can run some commands which presumably do something?

3. That something happens faster than something postgres can do, which I assume accomplishes the same thing?

4. I assume there's nothing else I need to know?

[+] areski|11 years ago|reply
Totally agree, people need to know how it works before they start using it.
[+] davesque|11 years ago|reply
From what little I've gathered, the internal implementation is basically a system which tracks hash-tagged copies of database tables.
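
In SQL terms, a snapshot presumably boils down to something like this (illustrative only; the actual statements and naming scheme may differ):

    -- Copy each tracked table into a hash-named shadow table (Postgres syntax):
    CREATE TABLE stellar_users_1a2b3c4d (LIKE users INCLUDING ALL);
    INSERT INTO stellar_users_1a2b3c4d SELECT * FROM users;
    -- Restoring is then mostly a matter of renaming tables back into place.
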
[+] davesque|11 years ago|reply
There are only like 700 lines of Python code. Figure it out and submit a pull request to update the README.
[+] amirmc|11 years ago|reply
If anyone's interested in git-like storage systems then it's worth checking out Irmin [1]. Previous discussion is at [2].

Excerpt: "Irmin is a library to persist and synchronize distributed data structures both on-disk and in-memory. It enables a style of programming very similar to the Git workflow, where distributed nodes fork, fetch, merge and push data between each other. The general idea is that you want every active node to get a local (partial) copy of a global database and always be very explicit about how and when data is shared and migrated.

Irmin is not, strictly speaking, a full database engine. It is, as are all other components of Mirage OS, a collection of libraries designed to solve different flavours of the challenges raised by the CAP theorem. Each application can select the right combination of libraries to solve its particular distributed problem."

[1] http://openmirage.org/blog/introducing-irmin

[2] https://news.ycombinator.com/item?id=8053687

[+] sgrove|11 years ago|reply
I've spent a day looking at Irmin, trying to figure out the JSON CRUD interface, but there's just no documentation, and the IRC channel is pretty much dead. The code looks good, but it's my first introduction to OCaml, coming from Clojure. Would love to see some more documentation on it.
[+] mamcx|11 years ago|reply
Could this work to synchronize data from multiple clients for an invoice, for example?
[+] falcolas|11 years ago|reply
So, it appears to just copy tables around within the database. I wouldn't want to use this on a DB over a few MB in size. Sure, restores are "fast" (a table rename), but copies are not so much.

I can't imagine this would be kind to a production database (lots of cleanup from copied & deleted tables), and it would consume a lot more space than a gzipped logical backup of the tables in question.

[+] obsession|11 years ago|reply
I have regularly used this with a database that's nearing 1000 megabytes. I don't particularly mind slow snapshotting because my workflow is more about restoring the database back to a baseline than taking copies.

Please don't use this in production. It is not stable enough and you'll only end up with lost data.

[+] m3h|11 years ago|reply
Why does the author compare it to Git? The functions this software performs are nowhere near those performed by Git. Nor is it a proper version control system.
[+] lucian1900|11 years ago|reply
This sort of thing is useful, but already supported by Postgres through transactional DDL. Migrations that fail will have their transaction reverted.
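
Roughly what that buys you in Postgres (a sketch, not from the project):

    BEGIN;
    ALTER TABLE users ADD COLUMN age integer;
    ALTER TABLE users ADD CONSTRAINT age_positive CHECK (age > 0);
    -- If any statement above errors out, the whole transaction aborts and
    -- the schema is left exactly as it was before BEGIN.
    COMMIT;
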
[+] michaelmior|11 years ago|reply
Suppose I'm working on a new feature branch and I run a bunch of migrations and add new data. Then I need to go make a bug fix on the master branch. Using Stellar, I could just take a snapshot before starting my feature branch, restore the DB state, do my bug fix, then go back to the DB as it was in my feature branch. I haven't tried Stellar yet, but this seems like a scenario where it would be useful. (And definitely not supported through Postgres transactions.)
[+] obsession|11 years ago|reply
The transaction is only reverted if the migration fails. Stellar helps you if your migration succeeds but does the wrong thing (dropping the wrong column, missing the WHERE in an UPDATE statement).
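
For example, both of these run without error, so no transaction gets rolled back; only a snapshot taken beforehand saves you (hypothetical tables):

    ALTER TABLE users DROP COLUMN email;       -- dropped the wrong column
    UPDATE orders SET status = 'cancelled';    -- missing WHERE hits every row
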
[+] killing_time|11 years ago|reply
The use case this really gets me excited about is automated testing at the GUI level. If the performance is good enough, this would be really useful for restoring DB state in between tests.
[+] mdellabitta|11 years ago|reply
What if you're writing a migration that includes DML, and it fails partway through? Wouldn't it be good to be able to restore the DB state automatically and try again?
[+] Gigablah|11 years ago|reply
From the code:

    INSERT INTO %s.%s SELECT * FROM %s.%s
Yeah, good luck with that.
[+] kstrauser|11 years ago|reply
What's wrong with that? Assuming you escape the table names correctly, that seems reasonable. And if you can't escape the table names, you're going to have a hard time dynamically generating queries anyway. Parameterized queries are a baseline requirement for values, but are rarely supported by client libraries for things like table or schema names.
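
Concretely, identifiers can't be sent as bind parameters, so the client has to quote them itself; a correctly escaped version of that statement would come out looking something like this (Postgres-style quoting, hypothetical names):

    INSERT INTO "public"."stellar_users_1a2b3c4d"
    SELECT * FROM "public"."users";
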
[+] bronson|11 years ago|reply
Nice. I wrote a similar tool for Rails / ActiveRecord models: https://github.com/bronson/table_differ

It takes snapshots and computes diffs between snapshots or the live database. It lets me drop and re-import some of my app's tables, then compute the minimum set of changes between the previous import and the new import. I wouldn't call it "git for ActiveRecord models" but it appears to be similar to this project.
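
For what it's worth, the basic diff can be expressed as plain set operations (a sketch, not necessarily how table_differ implements it):

    -- Rows added or changed since the previous snapshot:
    SELECT * FROM products_snapshot_new
    EXCEPT
    SELECT * FROM products_snapshot_old;
    -- Swap the operands to get rows that were removed or replaced.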

Comments welcome! The docs, as always, could use some help.

[+] squigs25|11 years ago|reply
The implications of this extend beyond backing up your database.

Imagine a world where daily time-series data can be stored efficiently. This is a lesser-known use case, but it works like this: I'm a financial company and I want to store 1000 metrics about a potential customer: maybe the number of transactions in the past year, the number of defaults, the number of credit cards, etc.

Normally I would have to duplicate this row in the database every day/week/month/year for every potential customer. With some kind of git-like storage of diffs between today's row and yesterday's, I could easily have access to time-series information without duplicating unchanged data. This would accomplish MASSIVE storage savings.
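
A sketch of what that change-only layout could look like (hypothetical table and columns):

    CREATE TABLE customer_metrics (
        customer_id            bigint  NOT NULL,
        valid_from             date    NOT NULL,  -- day this version became current
        transactions_last_year integer,
        defaults_last_year     integer,
        credit_cards           integer,
        PRIMARY KEY (customer_id, valid_from)
    );
    -- A row is inserted only when at least one metric differs from the
    -- customer's previous row; unchanged days cost no extra storage.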

FWIW, efficiently storing time-series data is a big problem at my company. No off-the-shelf solution makes this easy for us right now, and we would rather throw cheap hard disks at the problem than expensive engineers.

[+] michaelmior|11 years ago|reply
There are a lot of existing compression algorithms for time series data that do just this. I'm not sure how well any of these are implemented however. I think the problem is not necessarily how the data is stored, since that's fairly easy to fix with a bit of engineering effort if you're willing to write your own system. The harder part is rewriting query engines to take advantage of this sort of compression. Although ideally this could just be abstracted away by the storage layer.
[+] beagle3|11 years ago|reply
Kx Systems' kdb+ does this incredibly quickly and easily. I'm sure OneTick, Vhayu and others do too, though I don't have experience with them.

If you insist on standard SQL databases for time series, you'll have a lot more pain.

[+] symfrog|11 years ago|reply
Have you looked at Datomic? It seems to fit your problem description well.
[+] habosa|11 years ago|reply
What about Cassandra? I believe it efficiently stores multiple time values for each (row, column) value as it changes. Google's BigTable design does this, and I believe you can use BigTable through App Engine.
[+] Terr_|11 years ago|reply
Sounds like a case where event sourcing & CQRS might've been handy. (Not really something you can easily bolt on afterwards, though.)
[+] personZ|11 years ago|reply
Column-oriented databases virtually all feature this in the form of column compression (e.g. "repeat this value for the next 1000 rows"). And if you don't want column compression, they have sparse data filling/interpolation -- e.g. use the last available value from a time series. This is pretty much their bread and butter. Interpolation is essentially making the query engine smarter, so you don't end up in the situation you're apparently facing where you have to insert duplicate records purely to satisfy a simplistic join.
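
In a plain SQL database you can get the same "last available value" behaviour with a lateral join, at the cost of a smarter query rather than a smarter engine (Postgres sketch, hypothetical tables):

    SELECT c.customer_id, m.*
    FROM customers c
    JOIN LATERAL (
        SELECT *
        FROM customer_metrics
        WHERE customer_id = c.customer_id
          AND valid_from <= DATE '2014-08-25'
        ORDER BY valid_from DESC
        LIMIT 1
    ) m ON true;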

Back to this product (which appears to simply wholesale-copy databases?), I use LVM for exactly what it is doing -- I create, roll back, access and update LVM snapshots of databases. The snapshots are instant, and in most situations the data duplication is very limited. LVM is one of the coolest, most under-appreciated facets of most Linux installs -- http://goo.gl/J2mIvG

[+] crad|11 years ago|reply
Maybe I'm missing something, but I didn't see anything with regard to indexes, users, stored procedures, views or what not.

Seems like it's for table schema snapshotting in a database without any external storage.

Browsing through the code, I see that it's highly table-centric, using SQLAlchemy.

[+] swehner|11 years ago|reply
Line 53 of https://github.com/fastmonkeys/stellar/blob/master/stellar/o... is

                CREATE TABLE %s.%s LIKE %s.%s
This made me think of a table called

                create table `a; drop table users;`  (col int);
... which works in mysql.

I don't know if the stellar code will trip over something like this. But mysql (SQL) shouldn't even allow names like that.

[+] obsession|11 years ago|reply
Yes, Stellar will probably trip over this. Similarly, if the attacker can edit stellar.yaml, they probably can edit your .bashrc as well.

I'm adding this to my TODO list.

[+] codeoclock|11 years ago|reply
Unfortunate name, excellent project :)
[+] gexla|11 years ago|reply
To elaborate on what this comment may be referring to: a Ripple (cryptocurrency) fork was recently released under the name Stellar.
[+] jdc0589|11 years ago|reply
Shameless plug for mite: https://github.com/jdc0589/mite-node Simple migrations that take advantage of everything you already know about git and sql, plus some other cool stuff.

It's not too mature yet, the readme is mediocre at best, and it has some issues that will pop up when working with a team, but it's pretty damn useful.

[+] jimktrains2|11 years ago|reply
While not exactly the same thing, I've recently found and started using https://github.com/nkiraly/DBSteward to specify schema and then store the spec in my repo with the code. It also supports diffing the current schema against a previous one, so that nice upgrade sql scripts can be generated.
[+] iurisilvio|11 years ago|reply
I expected something related to Stellar coins.

Looks like a good project, I definitely want an easy way to manage development databases.

[+] level09|11 years ago|reply
This is a nice project. I used to have my database dump tracked by git (in binary mode). Anytime my DB changed, I would overwrite the file with the new database dump and include it with the commit.

I'm just wondering if this project offers anything special/better than the method I described.

[+] ZenoArrow|11 years ago|reply
Just a small correction; it's not PostreSQL, it's PostgreSQL.