xsc | 3 years ago

It seems a solid first step would be to use EXPLAIN to understand query planning/scanning issues.
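For example (a minimal sketch using Python's stdlib `sqlite3` driver; the table and index names here are hypothetical, not from the article), `EXPLAIN QUERY PLAN` shows whether SQLite will scan the whole table or use an index for a given query:

```python
import sqlite3

# Toy in-memory schema standing in for a large time-series table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE requests (id INTEGER PRIMARY KEY, path TEXT, ts INTEGER)")
conn.execute("CREATE INDEX idx_requests_ts ON requests (ts)")

# EXPLAIN QUERY PLAN returns rows whose last column is a human-readable
# description of each step (SCAN = full table scan, SEARCH = index lookup).
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT count(*) FROM requests WHERE ts BETWEEN ? AND ?",
    (0, 100),
).fetchall()
for row in plan:
    print(row[-1])  # the plan detail; a range query on ts should use the index
```

PostgreSQL's `EXPLAIN` (and `EXPLAIN ANALYZE`) serves the same purpose, with much more detail about costs and row estimates.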

Assuming 6.5M rows/day * 1.5 yr ≈ 3.5B rows, how large is the dataset in both SQLite and pg? With indexes, does it fit in RAM?
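The back-of-envelope arithmetic, for reference (the bytes-per-row figure below is an assumed illustration, not from the article):

```python
rows_per_day = 6_500_000
days = int(365 * 1.5)               # ~547 days in a year and a half
total_rows = rows_per_day * days    # roughly 3.5 billion rows
print(f"{total_rows:,} rows")

# Hypothetical sizing: even at a lean 16 bytes of payload per row,
# before any index overhead, the raw data alone is on the order of:
approx_bytes = total_rows * 16
print(f"~{approx_bytes / 1e9:.0f} GB raw")
```

At that scale the working set is far larger than 2 GB of RAM, so whether queries hit an index or scan the table dominates performance.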

The Linode 2GB plan uses shared vCPUs, so results will vary depending on who is sharing the host with your service.

miguelgrinberg | 3 years ago

This isn't the topic of the article, so I haven't included it, but I have taken several "solid steps" to optimize this application throughout its year and a half in production.

This article is about only one of them: how I designed a test system to evaluate the two databases head to head.

yellowapple | 3 years ago

> This isn't the topic of the article

Query planning seems like it should be the topic of the article. SQLite and PostgreSQL do their query planning very differently, and SQLAlchemy is probably generating different queries for each. I can't really fathom how one can meaningfully "evaluate the two databases head to head" while seeming to actively avoid how they plan, optimize, and execute queries.