We've been using SQLite in production as our exclusive means for getting bytes to/from disk for going on 6 years now. To this day, not one production incident can be attributed to our choice of database or how we use it.
We aren't using SQLite exactly as intended either. We have databases in the 100-1000 gigabyte range that are concurrently utilized by potentially hundreds or thousands of simultaneous users. Performance is hardly a concern when you have reasonable hardware (NVMe/SSD) and utilize appropriate configuration (PRAGMA journal_mode=WAL).
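For anyone curious what that configuration looks like in practice, here's a minimal sketch using Python's stdlib sqlite3 (the pragma values beyond journal_mode are my own illustrative choices, not necessarily the poster's exact setup):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "users.db")  # stand-in for a real db file
con = sqlite3.connect(db_path)
con.execute("PRAGMA journal_mode=WAL")    # readers no longer block the single writer
con.execute("PRAGMA synchronous=NORMAL")  # fsync the WAL at checkpoints, not per commit
con.execute("PRAGMA busy_timeout=5000")   # wait up to 5s for the write lock instead of failing
mode = con.execute("PRAGMA journal_mode").fetchone()[0]
```

With WAL enabled, `mode` comes back as `wal` and the database gains the concurrent-reader behavior described above.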
In our testing, our usage of SQLite vastly outperformed an identical schema on top of SQL Server. It is my understanding that something about not having to take a network hop and being able to directly invoke the database methods makes a huge difference. Are you able to execute queries and reliably receive results within microseconds with your current database setup?
Sure, there is no way we are going to be able to distribute/cluster our product by way of our database provider alone, but this is a constraint we decided was worth it, especially considering all of the other reduction in complexity you get with single-machine business systems. I am aware of things like dqlite/rqlite, et al., but we simply don't have a business case that demands that level of resilience (and complexity) yet.
Some other tricks we employ: we do not use one gigantic SQLite database for the entire product. It's more like a collection of microservices that live inside one executable, each owning an independent SQLite database. So, we would have databases like Users.db, UserSessions.db, Settings.db, etc. We don't have any use cases that would require us to write some complex reporting query across multiple databases.
> It is my understanding that something about not having to take a network hop and being able to directly invoke the database methods makes a huge difference. Are you able to execute queries and reliably receive results within microseconds with your current database setup?
With SQL Server you can get a very fast local connection by specifying "server=(local)" in the connection string - this uses the shared-memory protocol, bypassing the network stack.
How is this setup fault tolerant? What happens if there is a hardware failure? How do you partition access in a way that means an extremely active user doesn't impact availability?
As long as you're estimating your future costs correctly, you're golden. If you aren't, and your application becomes complex through growth or you need resiliency, then you'll need to pay that cost - and that cost can be big.
Interesting. So I would ordinarily want to put a foreign key constraint on the user_id column of a UserSessions table (or similar). In general, presumably you have relationships across the tables that are in those discrete databases. Do you just enforce these constraints/do joins in code? It seems like splitting related tables across multiple databases loses some (possibly a lot?) of the benefits of relational DBs, so I'm curious how you handle/manage that.
That said, I love the idea of this architecture. Might use it for whatever next dumb little web service I cook up! I love how this simplifies a lot of dev/deployment ops - perfect for a side project.
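On the joins question above: SQLite's ATTACH makes cross-database joins ordinary SQL, though FOREIGN KEY constraints still cannot span files, so those checks stay in application code. A hedged sketch (table and file names invented to match the Users.db/UserSessions.db example):

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
# Hypothetical stand-ins for the per-service database files described above.
users = sqlite3.connect(os.path.join(d, "Users.db"))
users.execute("CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT)")
users.execute("INSERT INTO users VALUES (1, 'alice')")
users.commit()
users.close()

sessions = sqlite3.connect(os.path.join(d, "UserSessions.db"))
sessions.execute("CREATE TABLE sessions(token TEXT, user_id INTEGER)")
sessions.execute("INSERT INTO sessions VALUES ('tok1', 1)")
sessions.commit()  # ATTACH is not allowed inside an open transaction

# ATTACH exposes a second database file on the same connection,
# so the join below is just ordinary SQL.
sessions.execute("ATTACH DATABASE ? AS u", (os.path.join(d, "Users.db"),))
row = sessions.execute("""
    SELECT usr.name FROM sessions
    JOIN u.users AS usr ON usr.id = sessions.user_id
    WHERE sessions.token = 'tok1'""").fetchone()
```

Referential integrity across the attached files, however, still has to be enforced in code.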
Are you capable of achieving no-downtime deployment?
I mean, on the product I currently work on, we have one Mongo database and a cluster of 4 pods on which our backend is deployed. When we want to deploy a new feature without any downtime, one of the pods is shut down, our product still works with the 3 remaining pods, and we start a new pod with the new code; we do this for each of the 4 pods.
But with SQLite, if I understand correctly, you have one machine or one VM that is both running your backend and storing your SQLite.db file. If you want to deploy some new features on your backend, can you achieve no-downtime deployment?
This is really interesting. Would you mind sharing a bit about how you keep the data that is shared across services in sync? Or is there a hard separation of concerns, so that services would only communicate with the service that owns that data to obtain it?
To my knowledge, WAL mode still needs to serialize writes for each database file. I'm assuming this is not a setup where there are too many concurrent writers?
I believe SQLite is about to explode in usage into areas it’s not been used before.
SQL.js[0] and the incredible “Absurd SQL”[1] are making it possible to build PWAs and hybrid mobile apps with a local SQL db. Absurd SQL uses IndexedDB as a block store fs for SQLite so you don’t have to load the whole db into memory and get atomic writes.
Also I recently discovered the Session Extension[2] which would potentially enable offline distributed updates with eventual consistency!
I can imagine building a SAAS app where each customer has a “workspace”, each a single SQLite db, and a hybrid/PWA app which either uses a local copy of the SQLite db synced with the session extension, or uses a serverless backend (like CloudFlare Workers) where a lightweight function performs the db operations. I haven’t yet found a nice way to run SQLite on CloudFlare Workers - it needs some sort of block storage - but it can’t be far off.
I evaluated sqlite for a web extension but ultimately decided it wasn't worth it. There is no easy way to save the data directly to the file system. And saving the data in other ways meant I was probably better off with IndexedDB instead. Still, it is a tempting option, and one that seems to work well for separate tenancy.
When they tried to put SQLite into browsers I was excited but I had no project that needed it. It took years to finally get there, and wouldn’t you know it, they had just marked it as deprecated. KV stores are just not the same.
> I can imagine building a SAAS app where each customer has a “workspace” each as a single SQLite db
I did just that at my (now defunct) startup a few years ago. We were building a collaborative database query tool. The software we built used sqlite to keep a local db for storing user credentials, caching query results, etc. Bonus: we were able to have the local database file be encrypted, protecting the data at rest.
How would we deal with conflicts (e.g. syncing back several conflicting offline clients for the same user) with something based on the Session Extension?
I've always thought it interesting that there was a time when large(ish) websites were hosted using servers that would struggle to outperform a modern smart toaster or wristwatch, and yet modern web applications tend to demand a dramatic distributed architecture. I like the examples in this article showing what a single modern server can do when you're not scaling to Google's level.
As an aside, what about distributed derivatives of sqlite, like rqlite, as a response to the criticism that sqlite requires your database server to also be your web server? Could something like rqlite also provide a way for an sqlite database to grow into a distributed cluster at a later point?
> and yet modern web applications tend to demand a dramatic distributed architecture
The dirty secret is: most of them don't really need that architecture. Most, and I mean 95+%, of websites would run just fine on a single box running the websrv, db, and whatever script interpreter the backend runs on.
Sure, it may be a bit slower if it's not hooked up to a global CDN, and if the box goes down the user gets a 404. But it's also a lot easier for an admin to simply give the one box a good wallop and put it back online, since it's so simple.
I don't disagree with you, a single server can go a really, really long way in scale before you run into problems. I know because I've done it a few times.
The problem to me isn't the ability to scale on one server, it's the single point of failure. My biggest site is a wordpress box with one instance on a pretty big VPS. In the last year I've had several outages big enough to require a post-mortem (not complete outages, but periods with high levels of error rates/failures), and every time it has been because of internal data-center networking issues at my preferred cloud provider (thankfully their customer service is amazing and they tell me honestly what the problem was instead of leaving me to wonder and guess). So the main incentive for me to achieve horizontal scalability in that app is not scaling; it's high availability, so I can survive temporary outages caused by hardware, networking, and other stuff outside of my control.
Consider, though, that in days past (when your server would probably be the equivalent of a dual-socket Pentium 166-MMX), most clients would be coming in over slow links like 33.6-56 kbps dialup, and it wouldn't be a problem to serve them at all. Links were slow, users were patient, timeouts were high, webpages were sort of slim. Although if you ask me, they have always been heavy, just within the constraints of their time.
Then, of course, there was ISDN and xDSL, which would give you a true-to-god whopping 128 kbit/s for a while. 64 kbps if you were cheap. It took a while to get to affordable multiples of Mbit/s.
Now that there's at least a 10 Mbps uplink from each residential subscriber, it doesn't take long to DoS even a beefy server.
And I'd say that server-side, things improved vastly with the advent of FastCGI and its equivalents. Back in the heyday of your P166-MMX server, it was CGI with Perl, spawning a process for each incoming request, or "blazing-fast" Apache server-side includes, or other things like that. Maybe mod_perl, with its caveats on memory sharing.
Anyway, you're right in that whenever you show them a wider pipe, they will find more stuff to congest it with.
Not even a decade ago we were hosting our web-facing services and our postgres DB on the same server. A lot of smaller projects had their own db and credentials on the same instance. The idea of having to spin up independent DB servers for each one seemed wasteful to us, since back then each box would cost at least £10/mo or more on Linode or Rackspace and it meant more secrets to keep, more stuff to secure, more stuff to maintain.
It was only with the advent of heroku and the sudden shift to the cloud (AWS) that the DB stopped being software you ran on your box as a daemon, and became essentially a dedicated resource. Even for services running at a completely trivial scale.
It's mostly because there is a demand for HA which requires multiple replicas. The moment you start going down that route you increase complexity.
Whether most things actually require HA is debatable, but a lot of businesses make it a core requirement and so it gets baked into the architecture. Personally I feel like most stuff would be better suited to having fast fail-over and recovery early on, but my advice rarely gets taken. Instead you end up with complicated HA architectures that nobody totally understands, which then (inevitably, still) fall over and take hours to recover.
Edge networking should work for SaaS, especially if your tenants tend to be geographically clustered. Stand up stateless front end web servers in half a dozen regions, and have them talk back to a more central region, with caches and circuit breakers to control the amount of cross-region traffic. That geographical separation will also make diurnal traffic patterns more obvious because you need more servers later in the day in PST than in EST.
You don’t have to distribute everything to scale pretty high.
There are some important things that SQLite does not do.
It is not client/server; a process must be able to fopen() the database file. NFS and SMB are options that can convey access to remote systems, but performance will not likely be good.
Only a single process can write to the database at any time; it does not support concurrent writers.
The backup tools do not support point-in-time recovery to a specific past time.
If your application can live with these limitations, then it does have some wonderful features.
For someone who's completely allergic to SSH and linux, a managed Postgres service will take care of all that too.
SQLite seems simple in that it's "just a file". But it's not. You can't pretend a backup is just copying the file while the DB is operating and expect it to be consistent. You can't put the file on NFS, have multiple writers, and expect it to work. You can't use complex datatypes or have the database catch simple type errors for you. It's "simple" in precisely the wrong way - it looks simple, but actually using it well is not simple. It doesn't truly reduce operational burden; it only hides it until you find that it matters.
Similarly postgres is not automatically complex simply because it _can_ scale. It really is a good technology that can be simple at small scale yet complex if you need it.
Am I the only one who thinks SQLite is still too complicated for many programs? Maybe it's just the particular type of software I normally work on, which tends towards small, self-hosted networking services[0] that would often have a single user, or maybe federated with <100 users. These programs need a small amount of state for things like tokens, users accounts, and maybe a bit of domain-specific things. This can all live in memory, but needs to be persisted to disk on writes. I've reached for SQLite several times, and always come back to just keeping a struct of hashmaps[1] in memory and dumping JSON to disk. It's worked great for my needs.
Now obviously if I wanted to scale up, at some point you would have too many users to fit in memory. But do programs at that scale actually need to exist? Why can't everyone be on a federated server with state that fits in memory/JSON? I guess that's more of a philosophical question about big tech. But I think it's interesting that most of our tech stack choices are driven by projects designed to work at a scale most of us will never need, and maybe nobody needs.
As an aside, is there something like SQLite but closer to my use cases? So I guess like the nosql version of SQLite.
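The "struct of hashmaps in memory, JSON on disk" pattern, sketched in Python for illustration (field names are invented; boringproxy itself is written in Go):

```python
import json
import os
import tempfile
import threading

class Store:
    """In-memory state, flushed to a JSON file on every write."""
    def __init__(self, path):
        self.path = path
        self.lock = threading.Lock()
        self.state = {"tokens": {}, "users": {}}  # illustrative fields
        if os.path.exists(path):
            with open(path) as f:
                self.state = json.load(f)

    def set_token(self, token, user):
        with self.lock:
            self.state["tokens"][token] = user
            self._flush()

    def _flush(self):
        # write-then-rename so a crash mid-write can't corrupt the file
        tmp = self.path + ".tmp"
        with open(tmp, "w") as f:
            json.dump(self.state, f)
        os.replace(tmp, self.path)

path = os.path.join(tempfile.mkdtemp(), "state.json")
Store(path).set_token("abc", "alice")
reloaded = Store(path)  # simulates a process restart
```

The atomic-rename flush is the one subtlety; everything else is as simple as it looks.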
I have tried adopting SQLite in my side projects. The problem I encountered is that using managed PostgreSQL/MySQL is still more convenient and more reliable than using SQLite on a bare-metal VPS. I like to use Heroku or the Digital Ocean App platform, because I want to spend time creating and not managing the infrastructure (ci/cd, ssl certs, reverse proxy, db backup, scaling, container management, and what not). I tried looking for a managed SQLite but could not find one. On an unrelated note, I found Redis a good lightweight alternative to the classical psql/MySQL. Although still multi-tier and more difficult to model data in, it's initially cheaper and easier to manage than its relational counterparts. Has anyone had a similar setup/preference?
In the past I had a website with hundreds of gigs of data that needed updating regularly, but could be read-only from the web server perspective.
I used sqlite for that, and had a mysql server for the user data and stuff that needed to be written to. Performance was fantastic, users were happy, data updates were instantaneous: copy the new data to the server, then repoint a symlink.
Most of my work is modeling and simulation. Sqlite is almost always my output format; one case per database is really natural and convenient, both for analysis and run management.
I'm exactly at a point where I'm considering SQLite for its single file db advantage, but I'm struggling to find solutions for my use case.
I need to import some 30k JSONs of external monitor data from Lunar (https://lunar.fyi) into a normalized form so that everyone can query it.
I'd love to get this into a single SQLite file that can be served and cached through CDN and local browser cache.
But is there something akin to Metabase that could be used to query the db file after it was downloaded?
I know I could have a Metabase server that could query the SQLite DB on my server, but I'd like the db and the queries to run locally for faster iteration and less load on my server.
Besides, I'm reluctant to run a public Metabase instance given the log4j vulnerabilities that keep coming.
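A sketch of the import step using Python's stdlib sqlite3 (the JSON fields here are invented placeholders; the real Lunar payloads will differ):

```python
import glob
import json
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
# Stand-ins for the ~30k exported JSON files.
for i in range(3):
    with open(os.path.join(d, f"monitor{i}.json"), "w") as f:
        json.dump({"serial": f"S{i}", "brightness": 50 + i}, f)

con = sqlite3.connect(os.path.join(d, "monitors.db"))
con.execute("CREATE TABLE readings(serial TEXT, brightness INTEGER)")
for path in glob.glob(os.path.join(d, "*.json")):
    with open(path) as f:
        rec = json.load(f)
    con.execute("INSERT INTO readings VALUES (?, ?)",
                (rec["serial"], rec["brightness"]))
con.commit()
count = con.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
```

The resulting single .db file can then be served statically through a CDN, and in-browser tools like sql.js can query it locally without touching the server.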
I've had great success using SQLite as both a desktop application file format and web server database. I'll mention just one thing I like about it in the desktop application realm: undo/redo is implemented entirely within SQLite using in-memory tables and triggers following this as a starting point: https://www.sqlite.org/undoredo.html
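A stripped-down version of the linked pattern: a TEMP trigger records the inverse UPDATE for each change, and undo simply replays the newest logged statement (this is a sketch, not the full undoredo.html machinery, which also handles inserts, deletes, and redo barriers):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE note(id INTEGER PRIMARY KEY, body TEXT)")
con.execute("CREATE TEMP TABLE undolog(seq INTEGER PRIMARY KEY, sql TEXT)")
# TEMP trigger on a main-schema table, as in the linked article:
# log the SQL that would restore the old value.
con.execute("""
    CREATE TEMP TRIGGER note_undo AFTER UPDATE ON note BEGIN
      INSERT INTO undolog(sql) VALUES(
        'UPDATE note SET body=' || quote(old.body) || ' WHERE id=' || old.id);
    END""")
con.execute("INSERT INTO note(body) VALUES ('first draft')")
con.execute("UPDATE note SET body = 'second draft' WHERE id = 1")
# Undo = pop the most recent logged statement and run it.
undo_sql = con.execute("SELECT sql FROM undolog ORDER BY seq DESC").fetchone()[0]
con.execute(undo_sql)
body = con.execute("SELECT body FROM note WHERE id = 1").fetchone()[0]
```

Because the undo statement itself fires the trigger again, the log naturally accumulates the redo information too.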
I love to see that more projects are using SQLite as their main database.
One thing that I always wondered, though: does anyone know a big project/service that uses Golang and is backed by SQLite? This is because SQLite would require CGO, and CGO generally adds extra complexity and performance costs. I wonder how big Golang applications fare with this.
Not a "big project/service", but a Go project that uses Sqlite is one of my own, Timeliner[1], and its successor, Timelinize[2] (still in development). Yeah, the cgo dependency kinda sucks, but you don't feel it in code, just compilation. And it easily manages timeline databases of a million-plus entries just fine.
CGO isn't too big a problem, and if it is a real dealbreaker, something like https://pkg.go.dev/modernc.org/sqlite will work, as it transpiles the C into Go and passes the SQLite test suite. I think there is performance degradation with writes, but reads are still pretty quick.
Using https://github.com/mattn/go-sqlite3 for a pet project right now.
So far not a single issue with it, I never had to think about any CGO specifics.
No reader-writer lock. Still only one concurrent writer, but a write is an append to the WAL file, which is cheaper. You can adjust read vs. write performance by syncing the WAL file more or less often, and can also trade some durability for performance by syncing the WAL file to disk less often.
SQLite is great, but it's not the simple drop-in replacement for DB servers that HN often suggests it is.
My team at work has adopted it and generally likes it, but the biggest hurdle we've found is that it's not easy to inspect or fix data in production the way we would with postgres.
Nim forum has used SQLite as its db since 2012, and it fits the article's use case perfectly. The code is available, and it can be used to run a Discourse-inspired forum (although much less featured).
An interview with the creator, D. Richard Hipp - for a better and deeper understanding of the pitch they took, the industries they were in, and their approach to the database world they saw in front of them. See the obstacles and the solutions, and how that underestimated SQLite came to power a good chunk of all the mobile actions triggered by your apps. But just read the interview; I cannot reproduce the drama here in my own words.
The supposed 10-50x query latency improvement is a scant upside, likely to be worth little. In the extreme this is low single-digit milliseconds, so it will be dwarfed by network hops.
In return, you've coupled your request handler and its state, so you won't be able to treat services as ephemeral. Docker and Kubernetes, for instance, become difficult. You now require a heap of error-prone gymnastics to manage your service.
If the query latency really matters, use an in memory db such as Redis.
SQLite is great for embedded systems where you’re particularly hardware constrained and know your machine in advance. It could also be a reasonable option for small webservers running locally. For anything remote, or with the slightest ambition of scale, using SQLite is very likely a bad trade off.
lisper | 4 years ago:
How do you do joins?
samwillis | 4 years ago:
0: https://sql.js.org/
1: https://github.com/jlongster/absurd-sql
2: https://www.sqlite.org/sessionintro.html
koeng | 4 years ago:
Very useful for pure go applications!
bonyt | 4 years ago:
https://github.com/rqlite/rqlite
usrbinbash | 4 years ago:
https://idlewords.com/talks/website_obesity.htm#heavyclouds
zaptheimpaler | 4 years ago:
This is what it takes to run a basic postgres database on my own PC (in a docker compose file):
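(A representative minimal compose definition of the kind being referred to - placeholder image tag and credentials, not the author's actual file:)

```yaml
services:
  db:
    image: postgres:14
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```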
greatjack613 | 4 years ago:
It performs much better than postgres in terms of query latency, which is ultra-important for the domain we operate in.
I take machine-level backups every 2 hours, so in the event of an outage, I just boot the disk image on a new VM and it's off.
I would never do this at my professional job due to the stigma, but for this side project it has been incredible.
anderspitman | 4 years ago:
[0]: https://boringproxy.io/
[1]: https://github.com/boringproxy/boringproxy/blob/master/datab...
chunkyks | 4 years ago:
Anyway. Sqlite is amazing.
iffycan | 4 years ago:
It's not perfect, but it fills the niche nicely.
mholt | 4 years ago:
[1]: https://github.com/mholt/timeliner
[2]: https://twitter.com/timelinize
anderspitman | 4 years ago:
Isn't there some issue where SQLite basically has to be single-threaded in Golang programs, at least if you use the stdlib SQL library?
[0]: https://www.arp242.net/static-go.html
koeng | 4 years ago:
https://pkg.go.dev/modernc.org/sqlite
It works well, but the performance is worse than the C version's. Not a big deal for what I used it for, though; it was approx. 6x worse at inserts.
pietroppeter | 4 years ago:
https://github.com/nim-lang/nimforum
reneberlin | 4 years ago:
https://corecursive.com/066-sqlite-with-richard-hipp/
Mikepicker | 4 years ago:
It's a "read only" and small website (at least for now), with just a bunch of daily visitors: a perfect use case for SQLite.
The funny thing is that in my case the database is so small that it's pushed directly to the repo.
Especially for startups and little projects, SQLite is your best friend.