As a backend database that's not multi-user, how many web connections doing writes can it realistically handle? Assuming the writes are small, say 100+ rows each?
After 2 years in production with a small (but write-heavy) web service... it's a mixed bag. It definitely does the job, but not having a DB server has drawbacks as well as benefits. The biggest is the lack of caching of the file/DB in RAM. As a result I have to do my own read caching, which is fine in Rust using the moka caching library, but it's still something you have to do yourself that would otherwise come for free with Postgres.
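For a rough idea of what that application-side read caching amounts to, here is a minimal sketch assuming the moka and rusqlite crates; the table, column, and function names are made up for illustration, not taken from the poster's service.

    // Minimal sketch of app-side read caching in front of SQLite.
    // Assumes the `moka` and `rusqlite` crates; schema is illustrative only.
    use std::time::Duration;

    use moka::sync::Cache;
    use rusqlite::Connection;

    fn load_username(
        conn: &Connection,
        cache: &Cache<i64, String>,
        user_id: i64,
    ) -> rusqlite::Result<String> {
        // Serve from the in-process cache when possible...
        if let Some(name) = cache.get(&user_id) {
            return Ok(name);
        }
        // ...otherwise hit the SQLite file and remember the result.
        let name: String = conn.query_row(
            "SELECT name FROM users WHERE id = ?1",
            [user_id],
            |row| row.get(0),
        )?;
        cache.insert(user_id, name.clone());
        Ok(name)
    }

    fn main() -> rusqlite::Result<()> {
        let conn = Connection::open("app.db")?;
        conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)",
            [],
        )?;
        conn.execute("INSERT OR IGNORE INTO users (id, name) VALUES (1, 'alice')", [])?;

        let cache: Cache<i64, String> = Cache::builder()
            .max_capacity(10_000)
            .time_to_live(Duration::from_secs(60))
            .build();

        println!("{}", load_username(&conn, &cache, 1)?);
        Ok(())
    }

The cache here is per-process, which is exactly the limitation described next.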
This of course also makes it impossible to share the cache between instances; doing so would require employing Redis/Memcached, at which point it would be better to just use Postgres.
It has been OK so far, but I will definitely have to migrate to Postgres at some point, sooner rather than later.
How would caching on the db layer help with your web service?
In my experience, caching makes the most sense at the CDN layer, which caches not only the DB requests but the result of the rendering and everything else. So most requests do not even hit your server, and those that do need fresh data anyhow.
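To make that concrete, letting the CDN do the caching mostly means serving rendered pages with a Cache-Control header so the edge can reuse whole responses. A hedged sketch, assuming the axum and tokio crates (the route and max-age are arbitrary choices, not anything from the thread):

    // Hedged sketch (axum + tokio assumed): serve rendered output with a
    // Cache-Control header so a CDN in front can cache whole responses.
    use axum::{http::header, response::IntoResponse, routing::get, Router};

    async fn page() -> impl IntoResponse {
        // The CDN caches the rendered result; endpoints that must always be
        // fresh would instead send "no-store" or a very short max-age.
        (
            [(header::CACHE_CONTROL, "public, max-age=300")],
            "<html><body>rendered page</body></html>",
        )
    }

    #[tokio::main]
    async fn main() {
        let app = Router::new().route("/", get(page));
        let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
        axum::serve(listener, app).await.unwrap();
    }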
A couple thousand simultaneous write connections should be fine, depending on total system load, whether you're running on spinning disks or on SSDs, and your p50/p99 latency demands; and of course you'd need to enable the WAL journal_mode pragma in the first place so that writes don't block concurrent reads. Run an experiment to be sure about your specific situation.
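As a starting point for such an experiment, opening the database with WAL and a busy timeout might look like this minimal sketch (a recent rusqlite is assumed; the file and table names are arbitrary):

    // Minimal sketch (recent rusqlite assumed): open SQLite with WAL and a
    // busy timeout so concurrent web handlers wait for the write lock
    // instead of failing immediately with SQLITE_BUSY.
    use std::time::Duration;

    use rusqlite::Connection;

    fn open_db(path: &str) -> rusqlite::Result<Connection> {
        let conn = Connection::open(path)?;
        // WAL lets readers proceed while a single writer holds the lock.
        conn.pragma_update(None, "journal_mode", "WAL")?;
        // Wait up to 5 s for the lock instead of erroring out under contention.
        conn.busy_timeout(Duration::from_secs(5))?;
        Ok(conn)
    }

    fn main() -> rusqlite::Result<()> {
        let conn = open_db("app.db")?;
        conn.execute(
            "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)",
            [],
        )?;
        Ok(())
    }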
If your writes are fast, doing them serially does not cause anyone to wait.
How often does the typical user write to the DB? Often it is something like once per day (for example on Hacker News). Say a write takes 1/1000 of a second. Then you can serve
1000 × 60 × 60 × 24 = 86,400,000, i.e. roughly 86 million users per day.
And nobody has to wait longer than a second when they hit the "reply" button, as I do now ...
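For anyone who wants to check the per-write cost on their own hardware rather than assume 1 ms, a rough benchmark sketch could look like the following (rusqlite assumed; the schema is made up):

    // Rough throughput check (rusqlite assumed): measure how long small,
    // fully serialized write transactions take on this machine's disk.
    use std::time::Instant;

    use rusqlite::Connection;

    fn main() -> rusqlite::Result<()> {
        let mut conn = Connection::open("bench.db")?;
        conn.pragma_update(None, "journal_mode", "WAL")?;
        conn.execute(
            "CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, body TEXT NOT NULL)",
            [],
        )?;

        let n = 1_000u32;
        let start = Instant::now();
        for i in 0..n {
            // One small transaction per simulated "reply" click.
            let tx = conn.transaction()?;
            tx.execute("INSERT INTO posts (body) VALUES (?1)", [format!("reply {i}")])?;
            tx.commit()?;
        }
        let per_write = start.elapsed().as_secs_f64() / f64::from(n);

        // If every user writes once per day, this is roughly the user count
        // a single serialized writer could keep up with.
        println!(
            "{:.4} s per write -> about {:.0} writes per day",
            per_write,
            86_400.0 / per_write
        );
        Ok(())
    }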
> If your writes are fast, doing them serially does not cause anyone to wait.
Why impose such a limitation on your system when you don't have to? You could use some other database actually designed for multi-user systems (Postgres, MySQL, etc.).
That depends on the use case. HN is not a good example. I am referring to business applications where users submit data. Of course in those cases we are looking at hundreds, not millions, of users. For those, the answer is good enough.
Turns out it's a lot when you have things like "last accessed" timestamps on your models (see the sketch below).
Really depends on the app.
I also don't think that calculation is valid. Your users aren't going to access the app uniformly over the course of a day. Invariably you'll have queuing delays at a significantly smaller user count (though maybe the delays are acceptable).
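As an illustration of how such a timestamp quietly turns reads into writes, here is a hypothetical sketch (rusqlite assumed; table and column names made up): every authenticated request ends up issuing an UPDATE, so the once-per-day write assumption above no longer holds.

    // Hypothetical sketch (rusqlite assumed): a "last accessed" column means
    // every authenticated request performs a write, not just explicit submits.
    use rusqlite::Connection;

    fn touch_last_accessed(conn: &Connection, user_id: i64) -> rusqlite::Result<usize> {
        conn.execute(
            "UPDATE users SET last_accessed = CURRENT_TIMESTAMP WHERE id = ?1",
            [user_id],
        )
    }

    fn main() -> rusqlite::Result<()> {
        let conn = Connection::open("app.db")?;
        conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, last_accessed TEXT)",
            [],
        )?;
        conn.execute("INSERT OR IGNORE INTO users (id) VALUES (1)", [])?;
        // Called from request-handling code on every page view.
        touch_last_accessed(&conn, 1)?;
        Ok(())
    }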