The details of how the data is stored are described in this 1979 paper by Douglas Comer: "The Ubiquitous B-Tree" [1]. Not much has really changed since it was invented: practically all relational databases use some variation of this structure. I wrote about it in [2], which might help explain some aspects of how it works.
A lot of modern databases don't use B-trees anymore. With an LSM structure you can use other suitable data structures, because the on-disk components are written once and never modified in place. For example, LevelDB uses a simple binary search over its sorted table files.
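To illustrate the point (a hedged Python sketch, not LevelDB's actual code): once a run of entries is written out in sorted order and never updated in place, lookup can be a plain binary search over an array, with no tree rebalancing at all:

```python
import bisect

# An immutable sorted "run", as an LSM tree might write one to disk:
# keys are sorted once at write time and never modified afterwards.
keys = ["apple", "banana", "cherry", "grape", "melon"]
values = ["red", "yellow", "red", "purple", "green"]

def lookup(key):
    """Binary search the sorted run; O(log n), no insertion logic needed."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return values[i]
    return None
```

A mutable structure would need to keep itself balanced under inserts; an immutable run can stay this simple.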
The SQLite code is nicely split into modules, and builds relatively quickly. So the easier way to explore the codebase is to load the individual modules under src/ into your editor, rather than the amalgamated sqlite3.c file, which is too big to be manageable in an editor.
IIRC, the heart of the db engine appears to be a virtual machine that executes the parsed SQL. Stepping through in a debugger will inevitably land you in the loop that executes the statements.
My C is pretty limited, and yet I was able to hack in support for table-prefixed columns in result sets over a night or two, having never seen the codebase before. It was fun to work with!
They also provide an amalgamation, generated as part of their build process, so that you can simply add one .h and one .c file to your project and have the whole SQLite goodness linked directly in.
Anyone looked at DuckDB [1]? Like SQLite, it's an embedded relational database with a C API. In fact, it has a wrapper which presents the same API as SQLite [2]. But it's optimised for analytical rather than transactional work - a few huge queries rather than lots of little reads and writes.
DuckDB offers a SQLite compatible interface; we use it extensively ourselves as it allows us to use the SQLite shell directly.
The internal design of DuckDB, however, is different and not directly related to SQLite. As you mentioned, it is entirely optimized for analytical workloads, whereas SQLite is optimized for point queries and point updates. The main things we share are a single-file storage format and an amalgamation build (i.e. single-source-file compilation: duckdb.cpp and duckdb.hpp).
So I guess I can use SQLite connectors or a GUI to play with it? I was just looking for this.
Not a dev so I'm a bit dumb with this stuff, but I was definitely looking for pivoting stuff and such, which requires a lot of extra steps with sqlite.
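For what it's worth, those extra steps in plain SQLite usually mean pivoting by hand with conditional aggregation: one CASE expression per output column. A sketch with made-up sales data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, quarter TEXT, amount INT)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("north", "Q1", 10), ("north", "Q2", 20),
    ("south", "Q1", 5),  ("south", "Q2", 15),
])

# Pivot quarters into columns manually: each column is a CASE expression.
# Adding a new quarter means editing the query -- the "extra steps".
rows = con.execute("""
    SELECT region,
           SUM(CASE WHEN quarter = 'Q1' THEN amount ELSE 0 END) AS q1,
           SUM(CASE WHEN quarter = 'Q2' THEN amount ELSE 0 END) AS q2
    FROM sales
    GROUP BY region
    ORDER BY region
""").fetchall()
```

DuckDB's analytical feature set makes this kind of reshaping much less manual.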
Lookup is easy. Update is hard. SQLite does quite well at updates, given the limitations it works under. Multiple instances can update the same database while maintaining strong consistency, yet there's no common server process that can see all the transactions. This costs you some speed on updates (don't build your web service this way), but it's pretty good for most low-write-traffic use cases.
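A quick sketch of that property using Python's stdlib sqlite3 module: two completely independent connections to the same file, no server process in between, and the reader still sees the writer's committed transaction (coordination happens through file locks):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Two independent "instances": separate connections, no shared server.
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, n INT)")
writer.execute("INSERT INTO counters VALUES ('hits', 1)")
writer.commit()  # consistency is enforced via locks on the file itself

value = reader.execute(
    "SELECT n FROM counters WHERE name = 'hits'").fetchone()[0]
```

In a real multi-process setup each process would open its own connection the same way; the file locking protocol is what stands in for the missing server.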
> I’m still not super clear on how exactly the interior pages are structured and how the pointers to their child pages work. It’s time to sleep now, but perhaps that will happen another day!
Luckily, the SQLite source code is very well documented, and 'btreeInt.h' has a comment describing how the btree pointers are laid out:
I happen to use the SQLite btree directly, without the SQL layer (the VDBE), as a key-value store on top of a userspace filesystem (chrome cachefs), and even in this configuration it works wonderfully.
** The basic idea is that each page of the file contains N database
** entries and N+1 pointers to subpages.
**
** ----------------------------------------------------------------
** | Ptr(0) | Key(0) | Ptr(1) | Key(1) | ... | Key(N-1) | Ptr(N) |
** ----------------------------------------------------------------
**
** All of the keys on the page that Ptr(0) points to have values less
** than Key(0). All of the keys on page Ptr(1) and its subpages have
** values greater than Key(0) and less than Key(1). All of the keys
** on Ptr(N) and its subpages have values greater than Key(N-1). And
** so forth.
**
** Finding a particular key requires reading O(log(M)) pages from the
** disk where M is the number of entries in the tree.
**
** In this implementation, a single file can hold one or more separate
** BTrees. Each BTree is identified by the index of its root page. The
** key and data for any entry are combined to form the "payload". A
** fixed amount of payload can be carried directly on the database
** page. If the payload is larger than the preset amount then surplus
** bytes are stored on overflow pages. The payload for an entry
** and the preceding pointer are combined to form a "Cell". Each
** page has a small header which contains the Ptr(N) pointer and other
** information such as the size of key and data.
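The search rule quoted above can be sketched in a few lines of Python, modeling one interior page as its N keys plus N+1 child-page pointers (the page numbers here are made up for illustration):

```python
# One interior page: N keys and N+1 child pointers, laid out as
# Ptr(0), Key(0), Ptr(1), Key(1), ..., Key(N-1), Ptr(N).
keys = [10, 20, 30]          # Key(0) .. Key(N-1)
ptrs = [101, 102, 103, 104]  # Ptr(0) .. Ptr(N), illustrative page numbers

def child_for(key):
    """Return the subpage to descend into: Ptr(i) covers keys below Key(i),
    and the rightmost Ptr(N) covers keys above Key(N-1)."""
    for i, k in enumerate(keys):
        if key < k:
            return ptrs[i]
    return ptrs[-1]
```

Repeating this descent from the root page until a leaf is reached gives the O(log(M)) page reads mentioned in the comment.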
I guess if she goes straight to using the btree layer, bypassing the SQL layer, it will be easier to figure out how the pages and pointers work and relate to each other.
After that, try it out with the SQL layer and see how it works.
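The internal btree API isn't reachable from outside SQLite's own C code, but the same key-value idea is easy to sketch through the SQL layer (a hedged Python sketch, not the commenter's actual setup, which talks to the btree directly):

```python
import sqlite3

class KV:
    """A minimal key-value store riding on SQLite's SQL layer.

    Under the hood this is still one btree per table; the SQL layer
    just adds parsing and the VDBE on top.
    """
    def __init__(self, path=":memory:"):
        self.con = sqlite3.connect(path)
        self.con.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

    def put(self, k, v):
        # INSERT OR REPLACE gives upsert semantics on the primary key.
        self.con.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))
        self.con.commit()

    def get(self, k):
        row = self.con.execute(
            "SELECT v FROM kv WHERE k = ?", (k,)).fetchone()
        return row[0] if row else None

kv = KV()
kv.put("name", "julia")
kv.put("name", "sqlite")  # overwrite in place
```

Going through SQL costs a parse and a VDBE program per call, which is exactly the overhead the direct-btree approach avoids.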
---
Edit: just to add to this, this line here
> Each BTree is identified by the index of its root page
When you call 'sqliteBtreeCreateTable()' it will return the root page of the table it creates. In my experience the numbering starts at 2, and my guess is that page 1 is probably some master root page (I bet Julia will dig into this later).
This is how I've managed to have "keyspaces" here: I use the first created table as a meta-table, keeping an index of keyspace names and their root page number counterparts. On opening the db, it's just a matter of recovering this info and putting it in a hash table (keyspace => page number) for later use.
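You can watch those root page numbers from the SQL layer too: page 1 holds the database header and the root of sqlite_master, and every other table's root page is recorded there, much like the meta-table idea above. A sketch (exact page numbers can vary with settings such as auto_vacuum):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")

# sqlite_master is itself a btree rooted at page 1; each other btree's
# root page number is stored in it -- SQLite's own built-in meta-table.
roots = dict(con.execute(
    "SELECT name, rootpage FROM sqlite_master WHERE type = 'table'"))
```

So the "master root page" guess is right: SQLite already keeps a name-to-root-page index, which is what the custom meta-table recreates for raw keyspaces.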
Embedded KV stores are as comparable to SQLite as Cassandra is to Postgres (i.e. lacking joins or other data modelling essentials), but I agree that embedding a database directly inside your app is interesting and opens up a world of possibilities for efficiently blending N+1 code and queries. It gets even more interesting to take it a step further and embed a networked-replica of a database à la Datomic Peers.
I've only used Badger in an exploratory side project, but I've found it enjoyable insofar as it feels much easier to set up than something like RocksDB.
SQLite's true value is its cross-platform usability. If you want the same codebase for your application, with a local database, on all current major operating systems (Win/Droid/Lin/Mac/iOS), you go with SQLite.
SQLite is not as good at keeping big records on the server side; for that you go with PostgreSQL. And if you're Windows-only, then Access is faster for a local database. But if you want a good-enough local database that performs the same on all of them, then your only choice is SQLite.
Years ago I spent an evening at Recurse Center (at the time Hacker School) in NYC. My employer sent me with some recruiters to chat with participants about hiring possibilities. It was just so cool.
That night I met Julia (she and a friend were, if I remember correctly, trying to boot a handwritten kernel in a VM and figured it out around 11pm that only the first 300 bytes were getting loaded into memory).
I met Andrew Kelley, who had recently written an NES emulator that translated the code to LLVM (if I remember correctly).
I briefly met Filippo Valsorda. I think he was the person who, after I said I wanted to make a bitcoin arbitrage bot, immediately came up with putting the prices into a matrix and then computing optimal paths.
The place just seemed... magical. I have a family and kids in Canada, but I thought for a long time about trying to reorient my life to NYC for a bit. It was just so cool.
It reminded me of the joy of learning for the sake of learning, of hacking on things because it feels like a magic superpower to bring ideas to life, and of building things you have no intention of trying to make money from.
I'm certain none of those people remember me but I remember the evening fondly whenever I see any of them show up on HN. Cheers :)
I have a lot of respect for Recurse Center, and am a fan of the work that comes out of it. However, I feel that it is a very exclusive place, and if you are not good enough, you're just not welcome.
I applied for an internship there a couple of years ago. After writing an essay on why I wanted to do RC, and an interview where, IIRC, I shared my camera but my interviewers did not, I was rejected with a boilerplate "not for us" explanation, and that was that.
I would've felt much better about it if they had at least invited me to visit, but this experience gave it a very corporate and impersonal feel.
Wow, I've been using SQLite all this time without knowing how its internals actually work. Would love more insight into its memory usage, how it handles memory pressure, etc.
You can technically have many writer agents, but they will all compete for an internal SQLite mutex, so it's best to serialize write access in your own code.
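A sketch of that advice in Python: several threads funnel their writes through one lock of your own, so they never pile up on SQLite's internal locking (a shared connection with `check_same_thread=False` is used here because an in-memory database isn't shared between separate connections):

```python
import sqlite3
import threading

# One shared connection, guarded by our own lock -- the
# "serialize writes yourself" approach from the comment above.
con = sqlite3.connect(":memory:", check_same_thread=False)
con.execute("CREATE TABLE log (msg TEXT)")
write_lock = threading.Lock()

def write(msg):
    with write_lock:  # only one writer at a time, by construction
        con.execute("INSERT INTO log VALUES (?)", (msg,))
        con.commit()

threads = [threading.Thread(target=write, args=(f"msg{i}",))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

count = con.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

With a file-backed database you could give each thread its own connection instead; the point is that one application-level lock keeps writers from contending inside SQLite.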
[1] https://dl.acm.org/doi/10.1145/356770.356776
[2] https://grisha.org/blog/2013/05/11/relational-database-on-to...
Edit: also, [3] is about how to store SQLite in a Redis hash (rather than an on-disk file).
[3] https://grisha.org/blog/2013/05/29/sqlite-db-stored-in-a-red...
Of course, there are plenty of design docs, all on https://sqlite.org.
[1] https://duckdb.org/
[2] https://github.com/cwida/duckdb/tree/master/tools/sqlite3_ap...
> May you find forgiveness for yourself and forgive others.
> May you share freely, never taking more than you give.
If only more software engineers started out their projects with this mindset, the world would be a very different place.
https://web.archive.org/web/20130203112329/http://dev.hasenj...
Found the link from the following old HN discussion: https://news.ycombinator.com/item?id=5138866 - but had to use archive.org as the article is now a 404.
> If only more software engineers started out their projects with this mindset, the world would be a very different place.
Apparently this is a horrible thing to say?
Or is only one agent at a time allowed to write to it?