top | item 34603871

ddyevf635372|3 years ago

Running SQLite in memory as a test DB speeds up your test runner like crazy. You can do this if you use an SQL query builder library, because it can translate your queries to the specific database.
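
A sketch of the idea with plain sqlite3 (a query builder would let you swap the connection string without touching your queries):

```python
import sqlite3

# ":memory:" keeps the whole database in RAM: no disk IO at all.
# With a query builder (SQLAlchemy, Knex, jOOQ, ...) the same model
# code can target Postgres in production and this URL in tests.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
(count,) = conn.execute("SELECT COUNT(*) FROM users").fetchone()
```

Pointing the same suite at Postgres is then just a different connection URL.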

whalesalad|3 years ago

This can be a good fast/local test or a sanity check, but there are definite differences between databases that need to be accounted for, so take that green test pass with a bit of skepticism. You generally want to test on the same DB engine that is running your prod workloads. If your surface area is small you can get by with the approach you mentioned, but it would need to be marked as tech debt to be paid down as soon as possible, or as soon as that bug appears that manifests in Postgres but not SQLite :)

olau|3 years ago

On Postgres you can run

SET SESSION synchronous_commit TO OFF;

(Update: I just looked in our test code, you can also replace the CREATE TABLE commands with "CREATE UNLOGGED TABLE" to disable write-ahead logging.)

There are possibly other tricks?

I slightly disagree with the tech debt comment, though. If you get a huge speed up, it may be worth paying for the occasional bug, depending on the circumstances. Or you could do both, and only run the test on Postgres occasionally.
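
Those two tweaks could be wired into a test harness along these lines (a hypothetical helper; the function name and shape are made up for illustration):

```python
# Hypothetical test-harness helper: collect the Postgres speedups from
# the comment above into one place. Nothing here is a real library API.
def fast_test_db_setup(table_ddls):
    # Don't wait for WAL flushes on commit.
    stmts = ["SET SESSION synchronous_commit TO OFF;"]
    for ddl in table_ddls:
        # UNLOGGED tables skip write-ahead logging entirely.
        stmts.append(ddl.replace("CREATE TABLE", "CREATE UNLOGGED TABLE", 1))
    return stmts

setup = fast_test_db_setup(["CREATE TABLE users (id serial PRIMARY KEY);"])
```

The harness would then run these statements against the throwaway test database before the suite starts.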

bluGill|3 years ago

SQLite is the most popular database in the world by a large margin. While you are correct for those who use other databases, in the majority case you are wrong.

Of course, most people who have complex queries are probably not using SQLite and so may not care about testing against it.

danielheath|3 years ago

You can get the 'in-memory' speed advantage by putting the datastore on a ramdisk (or even just disabling fsync, which is pretty easy to do in PostgreSQL).

nicoburns|3 years ago

There’s also the eatmydata program, which does something similar by using LD_PRELOAD to intercept IO calls.

rc_mob|3 years ago

The problem is that it's not always compatible with features you use on your production database.

waste_monk|3 years ago

You can split the test regime so that as much as possible is covered with SQLite, and then have a second test phase with a heavyweight DB only if the first phase passes. Code errors, malformed SQL, etc. cause it to fail fast and early, and you only test with the real DB once you know everything else is working.

Or, along similar lines, you could divide it so that developers test things locally on their machines with SQLite, but once it gets pushed into CI (and passes code review etc.) it's tested against the heavy DB.
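
A minimal sketch of that split, assuming an environment variable (the name TEST_DB_URL is invented here) that CI sets to point at the real database:

```python
import os

# Local runs default to in-memory SQLite; CI exports TEST_DB_URL to
# aim the same suite at a real Postgres instance instead.
def test_db_url():
    return os.environ.get("TEST_DB_URL", "sqlite:///:memory:")
```

The rest of the suite only ever asks for `test_db_url()`, so it never needs to know which phase it is running in.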

gardenhedge|3 years ago

So you test against a different database technology than the one your software uses? I understand why that works, but it seems odd.

quantified|3 years ago

A really nice thing about Postgres and MySQL is that in the JVM world the H2 and HSQLDB engines have broad compatibility modes, so you can use them in-JVM for unit-test speed in many cases. It doesn't help with developing the SQL, but it does help with testing.

Snowflake, on the other hand, is just special.

sicp-enjoyer|3 years ago

Why should this be faster than a local Postgres instance with no traffic?

dragonwriter|3 years ago

Because you have neither IPC nor IO, whereas with a local Postgres server instance you have both?
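
That difference is easy to probe with SQLite itself, which is in-process either way, so the sketch below isolates only the IO half of the argument (a client/server database adds IPC on top):

```python
import os
import sqlite3
import tempfile

# Same workload, same library: once fully in RAM, once backed by a
# file. Timing the two runs shows the cost of the disk IO alone.
def run_workload(conn, n=1000):
    conn.execute("CREATE TABLE t (x INTEGER)")
    with conn:  # one transaction, committed on exit
        conn.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(n)))
    (total,) = conn.execute("SELECT SUM(x) FROM t").fetchone()
    return total

mem_total = run_workload(sqlite3.connect(":memory:"))
disk_total = run_workload(
    sqlite3.connect(os.path.join(tempfile.mkdtemp(), "t.db"))
)
```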