You know you need to be careful when an Amazon engineer argues for a database architecture that fully leverages (and makes you dependent on) the strengths of their employer's product. In particular:
> Commit-to-disk on a single system is both unnecessary (because we can replicate across storage on multiple systems) and inadequate (because we don’t want to lose writes even if a single system fails).
This is surely true for certain use cases, say financial applications that must guarantee 100% uptime, but I'd argue the vast, vast majority of applications are perfectly OK with local commit and rapid recovery from remote logs and replicas. The point is, the cloud won't give you that distributed consistency for free; you will pay for it both in money and in complexity that, in practice, will lock you in to a specific cloud vendor.
I.e., it makes cloud and hosting services impossible to commoditize by the database vendors, which is exactly the point.
Skipping flushing the local disk seems rather silly to me:
- A modern high-end SSD commits faster than the one-way time to anywhere much farther than a few miles away. (Do the math: a few tens of microseconds of specified write latency is pretty common, NVDIMMs (a sadly dying technology) can do even better, and the speed of light is only so fast.)
- Unfortunate local correlated failures happen. IMO it’s quite nice to be able to boot up your machine / rack / datacenters and have your data there.
- Not everyone runs something on the scale of S3 or EBS. Those systems are awesome, but they are (a) exceedingly complex and (b) really very slow compared to SSDs. If I’m going to run an active/standby or active/active system with, say, two locations, I will flush to disk in both locations.
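The "do the math" aside in the first bullet can be sketched out. A minimal back-of-envelope calculation, with assumed numbers (a ~30 µs SSD write latency and light traveling at roughly 2/3 c in fiber), not figures from the thread:

```python
# One-way propagation delay in optical fiber vs. a local SSD commit.
# Assumed numbers: light travels at roughly 2/3 c in fiber, and a
# high-end SSD commits in ~30 microseconds.
C_FIBER_M_PER_S = 2e8          # ~2/3 of the speed of light in vacuum
SSD_WRITE_LATENCY_US = 30.0    # assumed specified write latency

def one_way_delay_us(distance_km: float) -> float:
    """Best-case one-way propagation time through fiber, in microseconds."""
    return distance_km * 1000 / C_FIBER_M_PER_S * 1e6

for km in (1, 10, 100, 1000):
    d = one_way_delay_us(km)
    winner = "SSD wins" if d > SSD_WRITE_LATENCY_US else "network wins"
    print(f"{km:>5} km: {d:>8.1f} us one-way ({winner})")
```

Under these assumptions the crossover sits at about 6 km one-way: beyond "a few miles", the local flush finishes before the packet even arrives at the remote replica.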
I should add that the bond between relational databases and spinning rust goes back further. My dad, who started working as a programmer in the 60s with just magtape as storage, talked about the early era of disks as a big step forward, but one requiring a lot of detailed work to decide where to put the data and how to find it again. For him, databases were a solution to the problems that disks created for programmers. And I can certainly imagine that. Suddenly you have to deal with way more data, stored in multiple dimensions (platter, cylinder, sector), with wildly nonlinear access times (platter rotation, head movement). I can see how commercial solutions to that problem would have been wildly popular, but also built around solving a number of problems that no longer matter.
I'm not sure I totally understand the timeline you're describing, but my understanding is that relational databases themselves were only invented in the 1970s. Is your reference to the 60s just giving context for when he started but before this link happened (with the idea that the problems predated the solution)?
> Design decisions like write-ahead logs, large page sizes, and buffering table writes in bulk were built around disks where I/O was SLOW, and where sequential I/O was order(s)-of-magnitude faster than random.
Overall speed is irrelevant; what mattered was the relative speed difference between sequential and random access.
And since there's still a massive difference between sequential and random access with SSDs, I doubt the overall approach of using buffers needs to be reconsidered.
Can you clarify? I thought a major benefit of SSDs is that there isn't any difference between sequential and random access. There's no physical head that needs to move.
Edit: thank you for all the answers -- very educational, TIL!
Same with doing things in RAM as well. Sequential writes and cache-friendly reads, which b-trees tend to achieve for any definition of cache. Some compaction/GC/whatever step at some point. Nothing's fundamentally changed, right?
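To make the sequential-vs-random question above something you can poke at yourself, here is an illustrative micro-benchmark sketch. The absolute timings depend entirely on the device, filesystem, and OS page cache (on a cached file you mostly measure syscall overhead), so treat it as a demonstration of the two access patterns, not as a verdict on SSDs:

```python
# Toy comparison of sequential vs. random 4 KiB reads against a scratch
# file. Timings are hardware- and cache-dependent; this only illustrates
# the access patterns under discussion.
import os, time, random, tempfile

BLOCK = 4096
NBLOCKS = 4096                      # 16 MiB scratch file

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * (BLOCK * NBLOCKS))
    path = f.name

def read_blocks(offsets):
    total = 0
    with open(path, "rb") as fh:
        for off in offsets:
            fh.seek(off)
            total += len(fh.read(BLOCK))
    return total

seq = [i * BLOCK for i in range(NBLOCKS)]
rnd = seq[:]
random.shuffle(rnd)                 # same blocks, shuffled order

t0 = time.perf_counter(); n_seq = read_blocks(seq); t1 = time.perf_counter()
n_rnd = read_blocks(rnd); t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.4f}s, random: {t2 - t1:.4f}s")
os.unlink(path)
```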
The author could have started by surveying the current state of the art instead of just falsely assuming that DB devs have been resting on their laurels for the past decades. If you want to see a (relational) DB for SSDs, just check out stuff like myrocks on zenfs+; it's pretty impressive stuff.
There has also been some significant academic study of DBMS design for persistent memory, which SSD technology can serve as (e.g. as NVDIMMs, or abstractly): think of no distinction between primary and secondary storage, RAM and disk; there's just a huge amount of not-terribly-fast memory, and whatever you write to memory never goes away. It's an interesting model.
> Companies are global, businesses are 24/7

Only a few companies are global, so only a few of them should optimize for that kind of workload. However, maybe every startup in SV must aim to become global, so probably that's what most of them must optimize for, even the ones that eventually fail to get traction.
24/7 is different, because even the customers of local companies, even B2B ones, might feel like doing some work at midnight once in a while. They'll be disappointed to find the server down.
Not for SSDs specifically, but I assume the compact design doesn't hurt: duckdb saved my sanity recently. Single file, columnar, with built-in compression I presume (given that in columnar storage even the simplest compression may be very effective), and with $ duckdb -ui /path/to/data/base.duckdb opening a notebook in the browser. Didn't find a single thing to dislike about duckdb, as a single user. To top it off, afaik it can be zero-copy 'overlaid' on top of a bunch of Parquet binary files to provide SQL over them?? (Didn't try it; would be amazing if it works well.)
Median database workloads are probably doing writes of just a few bytes per transaction, i.e. 'set last_login_time = now() where userid=12345'.
Due to the interface between SSD and host OS being block based, you are forced to write a full 4k page. Which means you really still benefit from a write ahead log to batch together all those changes, at least up to page size, if not larger.
A write-ahead log isn't a performance tool to batch changes, it's a tool to get durability of random writes. You write your intended changes to the log, fsync it (which means you get a 4k write), then make the actual changes on disk just as if you didn't have a WAL.
If you want to get some sort of sub-block batching, you need a structure that isn't random in the first place, for instance an LSM (where you write all of your changes sequentially to a log and then do compaction later)—and then solve your durability in some other way.
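The distinction drawn above can be sketched in a few lines: the WAL buys durability by fsyncing intent records before applying them, and batching several transactions under one fsync (group commit) is an optimization layered on top. The record format and names below are invented for illustration:

```python
# Minimal write-ahead-log sketch: append intent records, fsync once per
# batch ("group commit"), and rebuild state by replaying the log.
import os, json, tempfile

class TinyWAL:
    def __init__(self, path):
        self.path = path
        self.f = open(path, "ab")

    def commit_batch(self, changes):
        # One fsync covers the whole batch: this is the group commit.
        for key, value in changes:
            rec = json.dumps({"k": key, "v": value}).encode() + b"\n"
            self.f.write(rec)
        self.f.flush()
        os.fsync(self.f.fileno())   # durability point for every change above

    @staticmethod
    def replay(path):
        # Crash recovery: re-derive state from the durable log.
        state = {}
        with open(path, "rb") as f:
            for line in f:
                rec = json.loads(line)
                state[rec["k"]] = rec["v"]
        return state

tmp = tempfile.mkdtemp()
wal = TinyWAL(os.path.join(tmp, "wal.log"))
wal.commit_batch([("user:12345:last_login", "2024-01-01T00:00:00Z"),
                  ("user:67890:last_login", "2024-01-01T00:00:01Z")])
print(TinyWAL.replay(wal.path))
```

Two tiny updates share a single 4K-page fsync here, which is exactly the sub-block batching the parent comment was after.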
I keep seeing this crap-ass design where developers or some product a-hole decides their schema for users needs a field in the table showing the last_login_time. It's a shit design: it does not scale, nor is it necessary at all to get that one piece of information.
Use a damned log/journal table if you absolutely must, but make sure it's a fire-and-forget, non-transactional insert, because you morons keep writing software that requires locking the whole users table, and that causes huge performance issues.
> WALs, and related low-level logging details, are critical for database systems that care deeply about durability on a single system. But the modern database isn’t like that: it doesn’t depend on commit-to-disk on a single system for its durability story. Commit-to-disk on a single system is both unnecessary (because we can replicate across storage on multiple systems) and inadequate (because we don’t want to lose writes even if a single system fails).
And then a bug crashes your database cluster all at once, and now instead of missing seconds you miss minutes, because some smartass thought "surely if I send the request to 5 nodes, some of them will land on disk in the reasonably near future?"
I love how this industry invents best practices that are actually good, and then people just invent badly researched reasons to just... not do them.
The biggest lie we’ve been told is that databases require global consistency and a global clock. Traditional databases are still operating with Newtonian assumptions about absolute time, while the real world moves according to Einstein’s relativistic theory, where time is local and relative. You don't need a global order, you don't need a global clock.
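The standard construction behind the "no global clock" claim is a logical clock. A minimal Lamport-clock sketch (illustrative only, not something the comment or article spells out): each node keeps a counter, and ordering falls out of message causality rather than synchronized wall time.

```python
# Lamport logical clocks: each node advances a local counter and merges
# in timestamps carried on messages, yielding a causal order with no
# synchronized global clock.
class Node:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock          # timestamp carried on the message

    def receive(self, msg_ts):
        # Merge rule: jump past anything causally before the message.
        self.clock = max(self.clock, msg_ts) + 1
        return self.clock

a, b = Node("a"), Node("b")
a.local_event()                    # a's clock: 1
ts = a.send()                      # a's clock: 2
b.local_event()                    # b's clock: 1
rcv = b.receive(ts)                # b's clock: max(1, 2) + 1 = 3
print(a.clock, b.clock)
```

This gives you a partial (causal) order for free; whether that suffices, or you still need consensus for a total order, is exactly the trade-off the comment glosses over.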
Happens all the time (people ignore best practices because it's convenient, or 'just because', to do something different), literally everywhere, including normal society. Frankly, it's shocking anything works at all.
An often overlooked aspect of SSDs for databases is write endurance. With the wrong workload, random writes can burn through your drive in months instead of years. This is the hidden gem of LSM over B-tree: low write amplification, not just better write performance. Maybe it doesn't really matter on AWS, since you can just pitch your instance once you've trashed the SSDs.
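The endurance arithmetic is worth making concrete. All of the numbers below are assumptions for illustration (a drive rated 2400 TBW, 10k updates/s, ~100 logical bytes per update, 4 KiB physical page writes), not figures from the comment:

```python
# Back-of-envelope SSD endurance math with assumed numbers: every small
# random update forces a full 4 KiB page write, and the drive carries a
# fixed terabytes-written (TBW) rating.
TBW_RATING_TB = 2400            # assumed endurance rating
UPDATES_PER_SEC = 10_000
LOGICAL_BYTES_PER_UPDATE = 100
PAGE_BYTES = 4096
SECONDS_PER_DAY = 86_400

write_amp = PAGE_BYTES / LOGICAL_BYTES_PER_UPDATE   # physical / logical bytes
physical_tb_per_day = UPDATES_PER_SEC * PAGE_BYTES * SECONDS_PER_DAY / 1e12
lifetime_days = TBW_RATING_TB / physical_tb_per_day

print(f"write amplification: {write_amp:.0f}x")
print(f"physical writes: {physical_tb_per_day:.2f} TB/day")
print(f"rated lifetime at this rate: {lifetime_days:.0f} days")
```

Under these assumptions the drive's rated endurance is gone in under two years; push the update rate a few times higher and "months instead of years" is exactly what you get.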
> Commit-to-disk on a single system is both unnecessary
If you believe this, then what you want already exists. For example, MySQL has in-memory tables, but this design also pretty much sounds like NDB.
I don’t think I’d build a database the way they are describing for anything serious. Maybe a social network or other unimportant app where the consequences of losing data aren’t really a big deal.
At first glance this reads like a storage-interface argument, but it's really about media characteristics. SSDs collapse the random-vs-sequential gap, yet most DB engines still optimize for throughput instead of latency variance and write amplification. That mismatch is the interesting part.
I'm a little bit surprised enterprise isn't sticking with Optane for this. It's EoL tech at this point, but it'll still smoke top-of-the-line NVMe drives for small QD1 I/O, which I'd think you'd want for some databases.
Postgres allows you to choose a different page size, though only at compile time (configure's --with-blocksize option), not at initdb time. The default is 8K. I've always wondered if 32K wouldn't be a better value, and this article points in the same direction.
On the other hand, smaller pages mean that more pages can fit in your CPU cache. Since CPU speed has improved much more than memory bus speed, and since cache is a scarce resource, it is important to use your cache lines as efficiently as possible.
Ultimately, it's a trade-off: larger pages mean faster I/O, while smaller pages mean better CPU utilisation.
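One side of that trade-off is easy to quantify: larger pages mean higher B-tree fanout and thus fewer levels (and fewer I/Os) per lookup. A rough sketch, assuming an invented 24-byte index entry (16-byte key plus 8-byte child pointer) and 100M indexed rows:

```python
# Rough B-tree fanout/height arithmetic for different page sizes.
# Assumed entry size: 16-byte key + 8-byte child pointer = 24 bytes.
import math

ENTRY_BYTES = 24
N_ROWS = 100_000_000        # rows indexed

for page_bytes in (4096, 8192, 32768):
    fanout = page_bytes // ENTRY_BYTES          # entries per page
    height = math.ceil(math.log(N_ROWS, fanout))  # levels to cover N_ROWS
    print(f"{page_bytes:>6}B pages: fanout ~{fanout}, height {height}")
```

Under these assumptions, going from 8K to 32K pages drops the tree from four levels to three, at the cost of moving 4x more data per page access; the CPU-cache argument above pulls the other way.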
Back in college (for me, the 80s), I learned that storing table data in rows would greatly increase performance due to high seek times on hard disks: SELECT * FROM table WHERE ... could read in the entire row in a single seek. This was very valuable when your table had 100 columns.
However, a different query (e.g. SELECT name, phone_number FROM table) might result in fewer seeks if the data is stored by column instead of by row.
The article only seems to address data structures with respect to indexes, and not for the actual table data itself.
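The row-vs-column point above reduces to bytes scanned. A toy illustration (column widths are invented; the "bio" column stands in for the other ~97 columns of a wide table):

```python
# Toy contrast of row vs. columnar layout: bytes that must be scanned to
# answer `SELECT name, phone_number FROM people` over the whole table.
# Column widths are invented for illustration.
N_ROWS = 1_000_000
COLUMNS = {"id": 8, "name": 32, "phone_number": 16, "bio": 944}
ROW_BYTES = sum(COLUMNS.values())               # 1000 bytes per row

# Row store: rows are contiguous, so the scan touches every whole row.
row_store_bytes = N_ROWS * ROW_BYTES

# Column store: only the two referenced columns are read.
col_store_bytes = N_ROWS * (COLUMNS["name"] + COLUMNS["phone_number"])

print(f"row store scans    {row_store_bytes / 1e6:.0f} MB")
print(f"column store scans {col_store_bytes / 1e6:.0f} MB")
```

A roughly 20x reduction in data touched, before columnar compression even enters the picture; SELECT * flips the advantage back to the row store.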
It may be worth pointing out that the current highest-capacity EDSFF drives offer ~8PB in 1U. That is 320PB per rack, and current roadmaps in 10 years' time go up to 1000+ PB, or 1EB, per rack.
Designing databases for SSDs would still have a very long way to go before reaching what I think the author is actually suggesting, which is designing for the cloud or the datacenter.
Please give dbzero a try. It eliminates the database from the developer's stack completely, by replacing it with the DISTIC memory model (durable, infinite, shared, transactional, isolated, composable). It's built for the SSD/NVMe drive era.
And CedarDB (https://cedardb.com/) is the more commercialized product that is following up on some of this research, including employing many of the key researchers.
SSDs are more of a black box. The FTL adds another layer of indirection, and FTLs are mostly proprietary and vendor-specific, so the performance of SSDs is not generalizable.
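The indirection the FTL adds can be sketched in miniature. A dict-based toy (names and structure invented; real FTLs also handle wear leveling, GC, and power-loss recovery) showing the key behavior: flash can't overwrite in place, so every overwrite is redirected to a fresh physical page and the old copy turns stale:

```python
# Toy flash translation layer: logical pages map to physical pages, and
# an overwrite goes to a fresh physical page, leaving the old one
# invalid until garbage collection reclaims it.
class ToyFTL:
    def __init__(self, n_physical):
        self.mapping = {}                 # logical page -> physical page
        self.free = list(range(n_physical))
        self.invalid = set()              # stale pages awaiting GC

    def write(self, logical_page):
        if logical_page in self.mapping:
            # Flash can't overwrite in place: old copy becomes stale.
            self.invalid.add(self.mapping[logical_page])
        phys = self.free.pop(0)
        self.mapping[logical_page] = phys
        return phys

ftl = ToyFTL(n_physical=8)
ftl.write(0)           # logical 0 -> physical 0
ftl.write(1)           # logical 1 -> physical 1
ftl.write(0)           # overwrite: logical 0 -> physical 2, physical 0 stale
print(ftl.mapping, ftl.invalid)
```

Because the host only ever sees the logical side of this mapping, two drives with identical interfaces can behave very differently under the same workload, which is the non-generalizability the comment is pointing at.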
We care about wear to the extent we can get the expected 5 years out of SSDs as a capital asset, but below that threshold it doesn't really matter to us.
This made sense for product catalogs, employee dept and e-commerce type of use cases.
But it's an extremely poor fit for storing a world model that LLMs are building in an opaque and probabilistic way.
Prediction: a new data model will take over in the next 5 years. It might use some principles from many decades of relational DBs, but will also be different in fundamental ways.
I’m a bit disappointed the article doesn’t mention Aerospike. It’s not an RDBMS but a KV database commonly used in adtech, and extremely performant for that use case. Anyway, it’s actually designed for SSDs, which makes it possible to persist all writes even when the NIC is saturated with write operations. Of course, the aggregate bandwidth of the attached SSD hardware needs to be faster than the throughput of the NIC, but not by much; there’s very little overhead in the software.
How does that work? Is that an open source solution like the ZCRX stuff with io uring or does it require proprietary hardware setups? I'm hopeful that the open source solutions today are competitive.
I was familiar with Solarflare and Mellanox zero-copy setups in a previous fintech role, but at that time it all relied on black boxes (specifically out-of-tree kernel modules, delivered as blobs without DKMS or equivalent support, a real headache to live with) that didn't always work perfectly. It was pretty frustrating overall, because the customer paying the bill (rightfully) had less than zero tolerance for performance fluctuations. And fluctuations were annoyingly common, despite my best efforts (dedicating a core to IRQ handling, bringing up the kernel masked to another core, then pinning the user-space workloads to specific cores, and stuff like that). It was quite an extreme setup: GPS-disciplined oscillator with millimetre-perfect antenna wiring for the NTP setup, etc. We built two identical setups, one in Hong Kong and one in New York. Ah, very good fun overall, but frustrating because of stack immaturity at that time.
but... but... SSDs/NVMes are not really block devices. Not wrangling them into a block-device interface, but instead using their full set of features, can already yield major improvements. Two examples: metadata and indexes need smaller granularities than data, and an NVMe device can handle this quite naturally. Another example is that data can be sent directly from the device to the network, without the CPU being involved.
Unpopular opinion: databases were designed for 1980s-90s mechanics; the one thing that never innovates is the DB. They still use B-tree/LSM-tree structures that were optimized for spinning disks. The inefficiency is masked by hardware innovation and speed (Moore's Law).
There's plenty of innovation in DB storage tech, but the hardware interface itself is still page-based.
It turns out that btrees are still efficient for this work. At least until the hardware vendors deign to give us an interface to SSD that looks more like RAM.
Reading over https://www.cs.cit.tum.de/dis/research/leanstore/ and the associated papers and follow-up work is recommended.
In the meantime, with RAM prices skyrocketing, work and research in buffer & page management for greater-than-main-memory-sized DBs is set to be Hot Stuff again.
I like working in this area.
Optimising hardware to run existing software is how you sell your hardware.
The amount of performance you can extract from a modern CPU if you really start optimising cache access patterns is astounding.
High performance networking is another area like this. High performance NICs still go to great lengths to provide a BSD socket experience to devs. You can still get 80-90% of the performance advantages of kernel bypass without abandoning that model.
beAbU|2 months ago
It's very likely that he was part of the team that invented EC2.
nextaccountic|2 months ago
Anything like this, but for Postgres?
Actually, is it even possible to write a new storage engine for Postgres, the way MySQL has InnoDB, MyISAM, etc.?
evanelias|2 months ago
A massive number of companies have global customers, regardless of where the company itself has employees.
For example, my B2B business is relatively tiny, yet my customer base spans four continents. Or six continents if you count free users!
dist1ll|2 months ago
That would be asynchronous replication. But IIUC the author is instead advocating for a distributed log with synchronous quorum writes.
danielfalbo|2 months ago
[1] https://www.dr-josiah.com/2010/08/databases-on-ssds-initial-...
sscdotopen|2 months ago
https://db.in.tum.de/~freitag/papers/p29-neumann-cidr20.pdf
gethly|2 months ago
So, essentially just CQRS, which is usually handled at the application level with event sourcing and similar techniques.