jomohke | 1 year ago

I think that's why the GP said "on the same machine" — I read the article as comparing to the typical DB that is located on a separate machine in the data center, accessed over a network, as is currently the common setup.

But if we're considering running SQLite, the apt comparison would be against other DBs running on the same machine, because we've already decided local data storage is acceptable.

I assume Postgres/MySQL/etc would have higher latency than SQLite due to IPC overhead — but how significant is it?


jomohke | 1 year ago

(replying to myself)

I ran a quick, non-scientific Python script on my MacBook M2 against local Postgres and SQLite installs with no tuning.

It does 200 simple selects, after inserting random data, to match the article's "200 queries to build one web page".

(We're mostly worrying about real-world latency here, so I don't think the exact details of the simple queries matter. I ran it a few times to check that the values don't change drastically between runs.)
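For anyone who wants to reproduce this, a minimal sketch of that kind of timing loop looks roughly like the following (this is not my original script; the table name, row count, and use of an in-memory SQLite DB are all placeholders, and the Postgres side would be the same loop with a psycopg2 connection swapped in):

```python
import sqlite3
import statistics
import time

N_QUERIES = 200  # matches the article's "200 queries to build one web page"

# In-memory DB for illustration; the real test used an on-disk install.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany(
    "INSERT INTO items (val) VALUES (?)",
    [(f"row-{i}",) for i in range(1000)],
)
conn.commit()

# Time each simple select individually, then summarise.
times_ms = []
for i in range(N_QUERIES):
    start = time.perf_counter()
    conn.execute("SELECT val FROM items WHERE id = ?", (i % 1000 + 1,)).fetchone()
    times_ms.append((time.perf_counter() - start) * 1000)

print(f"Min:    {min(times_ms):.3f} ms")
print(f"Max:    {max(times_ms):.3f} ms")
print(f"Mean:   {statistics.mean(times_ms):.3f} ms")
print(f"Median: {statistics.median(times_ms):.3f} ms")
print(f"Total for {N_QUERIES} queries: {sum(times_ms):.3f} ms")
```

The absolute numbers will vary by machine and setup, so treat any single run as a ballpark, not a benchmark.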

    SQLite:
    Query Times:
      Min: 0.007 ms
      Max: 0.031 ms
      Mean: 0.007 ms
      Median: 0.007 ms
    Total for 200 queries: 1.126 ms

    PostgreSQL:
    Query Times:
      Min: 0.023 ms
      Max: 0.170 ms
      Mean: 0.028 ms
      Median: 0.026 ms
    Total for 200 queries: 4.361 ms

(Again, this is very quick and non-optimised, so I wouldn't take the measured differences too seriously.)

I've seen typical individual query latency of 4ms in the past when running DBs on separate machines, hence 200 queries would add almost an entire second to the page load.

But this is 4.3ms total, which sounds reasonable enough for most systems I've built.

A single non-trivial query required for page load could add more time than this entire latency overhead. So I'd probably pick DBs based on factors other than latency in most cases.