
IO Devices and Latency

443 points | milar | 11 months ago | planetscale.com

153 comments

bddicken|11 months ago

Author of the blog here. I had a great time writing this. By far the most complex article I've ever put together, with literally thousands of lines of js to build out these interactive visuals. I hope everyone enjoys.

tealpod|11 months ago

Your style of explanation and animation is exceptional.

jasonthorsness|11 months ago

The visuals are awesome; the bouncing-box is probably the best illustration of relative latency I've seen.

Your "1 in a million" comment on durability is certainly too pessimistic once you consider the briefness of the downtime before a new server comes in and re-replicates everything, right? I would think if your recovery is 10 minutes for example, even if each of three servers is guaranteed to fail once in the month, I think it's already like 1 in two million? and if it's a 1% chance of failure in the month failure of all three overlapping becomes extremely unlikely.

Thought I would note this because one-in-a-million is not great if you have a million customers ;)
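
A back-of-the-envelope version of that overlap math, as a sketch (assumed numbers: one failure per server per 30-day month, a 10-minute recovery window, independent failures; it lands in the same ballpark as the comment's estimate):

```python
# Probability that three servers, each failing once per 30-day month
# at a uniformly random time, are all down simultaneously, given a
# 10-minute recovery window per failure. Numbers are illustrative.

MONTH_MIN = 30 * 24 * 60      # minutes in a 30-day month
RECOVERY = 10                 # minutes until a failed server is replaced

# Fix server A's failure time. Servers B and C must each fail within
# roughly +/- RECOVERY minutes of it for all three outages to overlap.
window = 2 * RECOVERY
p_other = window / MONTH_MIN          # one other server overlaps A
p_all_three = p_other ** 2            # both others overlap A

print(f"P(all three down at once) ~ 1 in {1 / p_all_three:,.0f}")
```

With a 1%-per-month failure rate instead of a guaranteed failure, each factor shrinks by another 100x, which is why the overlapping case becomes vanishingly rare.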

b0rbb|11 months ago

The animations are fantastic, and awesome job with the interactivity. I often have to explain latency to folks in my work, and being able to see the extreme difference in latency between something like an HDD and an SSD makes it much easier for some people to understand.

Edit: And for real, fantastic work, this is awesome.

zalebz|11 months ago

The level of effort really shows through. If you had to ballpark it, how much time do you think you put in? (I realize keyboard time vs. kicking-it-around-in-your-head time are quite different.)

dormando|11 months ago

Half on topic: what libs/etc did you use for the animations? Not immediately obvious from the source page.

(It's a topic I'm deeply familiar with, so I don't have a comment on the content; it looks great on a skim!) But I've been sketching animations for my own blog and haven't liked the last few libs I tried.

Thanks!

anymouse123456|11 months ago

I love this kind of datavis.

We are generally bad at internalizing comparisons at these scales. The visualizations make a huge difference in building more detailed intuitions.

Really nice work, thank you!

hakaneskici|11 months ago

Great work! Thank you for making this.

This is beautiful and brilliant, and also is a great visual tool to explain how some of the fundamental algorithms and data structures originate from the physical characteristics of storage mediums.

I wonder if anyone remembers the old days where you programmed your own custom defrag util to place your boot libs and frequently used apps to the outer tracks of the hard drive, so they are loaded faster due to the higher linear velocity of the outermost track :)

AlphaWeaver|11 months ago

Were you at all inspired by the work of Bartosz Ciechanowski? My first thought was that you all might have hired him to do the visuals for this post :)

hodgesrm|11 months ago

I was delighted to see your models of tape operations, as I used them a lot in my COBOL days.

For reasons discussed in your article, we would arrange tape processing as sequential scans as much as possible, something at which COBOL was quite excellent. A common performance problem was a COBOL program whose processing speed could not keep up with the flow of blocks coming off the drive head.

In this case you would see the drive start to overshoot as it read more blocks than the COBOL program could handle. The drive would begin a painful jump-forward/spool-backward motion that made the performance issue quite visible. You would then eyeball the code to understand why the program was not keeping up, correct it, and resubmit until the motion disappeared.

logsr|11 months ago

Amazing presentation. It really helps to understand the concepts.

The only addition is that it understates the impact of SSD parallelism: 8-channel controllers are typical for high-end devices, and 4K random IOPS continue to scale with queue depth. But for an introduction the example is probably complex enough.

It is great to see PlanetScale moving in this direction and sharing the knowledge.

tombert|11 months ago

The visualizations are excellent, very fun to look at and play with, and they go along with the article extremely well. You should be proud of this, I really enjoyed it.

layer8|11 months ago

I don’t see any animations on Safari. Also, I’d much prefer a variable-width font, monospace prose is hard to read. While I can use Reader Mode, that removes the text coloring, and would likely also hide the visuals (if they were visible in the first place).

inetknght|11 months ago

I don't see a single visual. I don't use the web with javascript. Why not embed static images instead or in addition?

bob1029|11 months ago

I've been advocating for SQLite+NVMe for a while now. For me it is a new kind of pattern you can apply to get much further into trouble than usual. In some cases, you might actually make it out to the other side without needing to scale horizontally.

Latency is king in all performance matters. Especially in those where items must be processed serially. Running SQLite on NVMe provides a latency advantage that no other provider can offer. I don't think running in memory is even a substantial uplift over NVMe persistence for most real world use cases.

crazygringo|11 months ago

> I've been advocating for SQLite+NVMe for a while now.

Why SQLite instead of a traditional client-server database like Postgres? Maybe it's a smidge faster on a single host, but you're just making it harder for yourself the moment you have 2 webservers instead of 1, and both need to write to the database.

> Latency is king in all performance matters.

This seems misleading. First of all, your performance doesn't matter if you don't have consistency, which is what you now have to figure out the moment you have multiple webservers. And secondly, database latency is generally miniscule compared to internet round-trip latency, which itself is miniscule compared to the "latency" of waiting for all page assets to load like images and code libraries.

> Especially in those where items must be processed serially.

You should be avoiding serial database queries as much as possible in the first place. You should be using joins whenever possible instead of separate queries, and whenever not possible you should be issuing queries asynchronously at once as much as possible, so they execute in parallel.
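
The "issue them at once" point can be sketched with asyncio. The coroutines below are stand-ins for real driver calls (the names and latencies are invented for illustration):

```python
# Independent queries issued together run concurrently, so total wall
# time is roughly max(latencies) rather than their sum.
import asyncio
import time

async def fake_query(name, latency):
    await asyncio.sleep(latency)   # stand-in for a DB round trip
    return f"{name}-result"

async def main():
    start = time.monotonic()
    user, orders, prefs = await asyncio.gather(
        fake_query("user", 0.05),
        fake_query("orders", 0.05),
        fake_query("prefs", 0.05),
    )
    elapsed = time.monotonic() - start
    return user, orders, prefs, elapsed

results = asyncio.run(main())
print(results)
```

Run serially, the three awaits would take about the sum of the latencies; gathered, they take about the longest one.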

dangoodmanUT|11 months ago

The SQLite file format is laid out to hedge against HDD fragmentation. It wouldn't benefit from NVMe as much as a more modern, SSD-native layout would.

cynicalsecurity|11 months ago

SQLite doesn't work super well with parallel writes. It supports them, yes, but in a somewhat clunky way, and writes can still fail. To avoid problems with parallel writing, besides setting a specific (clunky) mode of operation, you can use the trick of funneling all writes through a single thread in the app. That usually makes the already-complicated parallel code slightly more complicated.

If only one thread of writing is required, then SQLite works absolutely great.
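
The single-writer-thread trick described above can be sketched with the stdlib. This is a minimal illustration, not production code (a real version would also set WAL mode and handle errors):

```python
# All writes are funneled through one queue so SQLite never sees two
# concurrent writers; any number of threads may enqueue safely.
import queue
import sqlite3
import threading

write_q = queue.Queue()
counts = []          # filled in by the writer thread on shutdown

def writer():
    # The writer thread owns the only connection.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
    while True:
        item = write_q.get()
        if item is None:                      # sentinel: shut down
            break
        key, value, done = item
        conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))
        conn.commit()
        done.set()                            # tell the producer it landed
    counts.append(conn.execute("SELECT COUNT(*) FROM kv").fetchone()[0])
    conn.close()

t = threading.Thread(target=writer)
t.start()

# A producer enqueues a write and waits for the acknowledgment.
done = threading.Event()
write_q.put(("a", "1", done))
done.wait()

write_q.put(None)
t.join()
print("rows written:", counts[0])
```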

jstimpfle|11 months ago

I still measure 1-2ms of latency with an NVMe disk on my Desktop computer, doing fsync() on a file on a ext4 filesystem.

Update: about 800us on a more modern system.
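
For anyone wanting to reproduce that kind of number, here is a rough fsync latency probe (a sketch only; results vary hugely with drive, filesystem, and mount options, and real benchmarking tools like fio do this properly):

```python
# Append a 4 KiB block, fsync, and time the fsync, repeatedly.
import os
import statistics
import tempfile
import time

def fsync_latencies(n=50):
    fd, path = tempfile.mkstemp()
    samples = []
    try:
        for _ in range(n):
            os.write(fd, b"x" * 4096)      # 4 KiB append
            start = time.perf_counter()
            os.fsync(fd)                   # force it to stable storage
            samples.append(time.perf_counter() - start)
    finally:
        os.close(fd)
        os.unlink(path)
    return samples

lat = fsync_latencies()
print(f"median fsync: {statistics.median(lat) * 1e6:.0f} us")
```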

sergiotapia|11 months ago

I had a lot of fun with Coolify running my app and my database on the same machine. It was pretty cool to see zero latency in my SQL queries, just the cost of the engine.

magicmicah85|11 months ago

Can I just say that this was so informative I completely forgot it was promoting a product? Excellent visuals and interactivity.

robotguy|11 months ago

Seeing the disk IO animation reminded me of Melvin Kaye[0]:

  Mel never wrote time-delay loops, either, even when the balky Flexowriter
  required a delay between output characters to work right.
  He just located instructions on the drum
  so each successive one was just past the read head when it was needed;
  the drum had to execute another complete revolution to find the next instruction.
  
[0] https://pages.cs.wisc.edu/~markhill/cs354/Fall2008/notes/The...

Thoreandan|11 months ago

I was reminded of Mel as well! If you haven't seen it, Usagi Electric on YouTube has gotten a drum-memory system from the 1950s nearly fully-functional again.

jhgg|11 months ago

Metal looks super cool. However, at my last job, when we tried using instance-local SSDs on GCP, there were serious reliability issues (e.g. blocks on the device losing data). Has this situation changed? What machine types are you using?

Our workaround was this: https://discord.com/blog/how-discord-supercharges-network-di...

rcrowley|11 months ago

Neat workaround! We only started working with GCP Local SSDs in 2024 and can report we haven't experienced read or write failures due to bad sectors in any of our testing.

That said, we're running a redundant system in which MySQL semi-sync replication ensures every write is durable to two machines, each in a different availability zone, before that write's acknowledged to the client. And our Kubernetes operator plus Vitess' vtorc process are working together to aggressively detect and replace failed or even suspicious replicas.

In GCP we find the best results on n2d-highmem machines. In AWS, though, we run on pretty much all the latest-generation types with instance storage.

gz09|11 months ago

Nice blog. There is also a problem that generally cloud storage is "just unusually slow" (this has been noted by others before, but here is a nice summary of the problem http://databasearchitects.blogspot.com/2024/02/ssds-have-bec...)

Having recently added support for storing our incremental indexes in https://github.com/feldera/feldera on S3/object storage (we had NVMe for longer due to obvious performance advantages mentioned in the previous article), we'd be happy for someone to disrupt this space with a better offering ;).

bddicken|11 months ago

That database architects blog is a great read.

__turbobrew__|11 months ago

I think something about distributed storage which is not appreciated in this article:

1. Some systems do not support replication out of the box. Sure your cassandra cluster and mysql can do master slave replication, but lots of systems cannot.

2. Your life becomes much harder with NVMe storage in the cloud, as you need to respect maintenance intervals and cloud-initiated drains. If you do not hook into those systems and drain your data to a different node, the data goes poof. Separating storage from compute allows the cloud operator to drain and move compute around as needed; since the data is independent of the compute, and the cloud operator manages that data system and its draining as well, the operator can manage workload placement without the customer needing to be involved.

rcrowley|11 months ago

Good points. PlanetScale's durability and reliability are built on replication - MySQL replication - and all the operational software we've written to maintain replication in the face of servers coming and going, network partitions, and all the rest of the weather one faces in the cloud.

Replicated network-attached storage that presents a "local" filesystem API is a powerful way to create durability in a system that doesn't build it in like we have.

392|11 months ago

This is where s2.dev could in theory come to the rescue. Able to keep up with the streaming bandwidth, but durable.

wmf|11 months ago

I assume DRBD still exists although it's certainly easier to use EBS.

maayank|11 months ago

what do you mean by drains?

CSDude|11 months ago

For years, I just didn't get why replicated databases always stick with EBS and deal with its latency. Replication is already there, so why not be brave and just go with local disks? At my previous orgs, where we ran Elasticsearch for temporary logs/metrics storage, I proposed we do exactly that, since we didn't even have major reliability requirements. But I couldn't convince them back then, and we ended up with the even worse AWS Elasticsearch.

I get that local disks are finite, yeah, but I think the core/memory/disk ratio would be good enough for most use cases, no? There are plenty of local disk instances with different ratios as well, so I think a good balance could be found. You could even use local hard disk ones with 20TB+ disks for implementing hot/cold storage.

Big kudos to the PlanetScale team, they're like, finally doing what makes sense. I mean, even AWS themselves don't run Elasticsearch on local disks! Imagine running ClickHouse, Cassandra, all of that on local disks.

jiggawatts|11 months ago

I looked into this with an idea of running SQL Server Availability Groups on the Azure Las_v3 series VMs, which have terabytes of local SSD.

The main issue was that after a stop-start event, the disks are wiped. SQL Server can’t automatically handle this, even if the rest of the cluster is fine and there are available replicas. It won’t auto repair the node that got reset. The scripting and testing required to work around this would be unsupportable in production for all but the bravest and most competent orgs.

hodgesrm|11 months ago

There are a number of axes of performance that aren't covered in this [wonderful] article on storage performance. One of these is that EBS allows you to scale the VM up / down to change the amount of CPU & RAM available to process data on disk. We run several hundred ClickHouse clusters on this model. Rescaling to address performance issues is far more common than failures.

For example: you get a tenant performance issue on Sunday morning US time. The simplest fix is often to rescale to a larger VM for the weekend, then get the A team working on the root cause first thing Monday. The incremental cost is minimal and avoids far more costly staff burnout.

ucarion|11 months ago

Really, really great article. The visualization of random writes is very nicely done.

On:

> Another issue with network-attached storage in the cloud comes in the form of limiting IOPS. Many cloud providers that use this model, including AWS and Google Cloud, limit the amount of IO operations you can send over the wire. [...]

> If instead you have your storage attached directly to your compute instance, there are no artificial limits placed on IO operations. You can read and write as fast as the hardware will allow for.

I feel like this might be a dumb series of questions, but:

1. The ratelimit on "IOPS" is precisely a ratelimit on a particular kind of network traffic, right? Namely traffic to/from an EBS volume? "IOPS" really means "EBS volume network traffic"?

2. Does this save me money? And if yes, is from some weird AWS arbitrage? Or is it more because of an efficiency win from doing less EBS networking?

I can see pretty clearly that putting storage and compute on the same machine is strictly a latency win, because you structurally have one less hop every time. But is it also a throughput-per-dollar win?

rbranson|11 months ago

> 1. The ratelimit on "IOPS" is precisely a ratelimit on a particular kind of network traffic, right? Namely traffic to/from an EBS volume? "IOPS" really means "EBS volume network traffic"?

The EBS volume itself has a provisioned capacity of IOPS and throughput, and the EC2 instance it's attached to has its own limits as well, across all the EBS volumes attached to it. I would characterize it more as a different model: an EBS volume isn't just a slice of a physical PCB attached to a PCIe bus, it's a share in a large distributed system of physical drives, with its own dedicated network capacity to/from compute, like a SAN.

> 2. Does this save me money? And if yes, is from some weird AWS arbitrage? Or is it more because of an efficiency win from doing less EBS networking?

It might. It's a set of trade-offs.

the8472|11 months ago

For network-attached storage, an IOPS cap limits operations per second, not bandwidth, since IO operations can come in different sizes (e.g. 4K vs. 16K blocks).
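
Concretely, the same IOPS budget translates into very different bandwidth depending on IO size (the provisioned-IOPS figure below is illustrative, not any provider's actual limit):

```python
# Bandwidth = IOPS x IO size, which is why providers typically cap
# IOPS and throughput as separate dimensions.
iops_limit = 16_000            # illustrative provisioned-IOPS figure

for block_bytes in (4 * 1024, 16 * 1024, 256 * 1024):
    mb_s = iops_limit * block_bytes / 1_000_000
    print(f"{block_bytes // 1024:>3} KiB IOs: {mb_s:,.1f} MB/s")
```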

_1tem|11 months ago

If this is true, then how do "serverless" database providers like Neon advertise "low latency" access? They use object storage like S3, which I imagine is an order of magnitude worse than networked storage for latency.

edit: apparently they build a kafkaesque layer of caching. No thank you, I'll just keep my data on locally attached NVMe.

hodgesrm|11 months ago

> edit: apparently they build a kafkaesque layer of caching. No thank you, I'll just keep my data on locally attached NVMe.

I can't speak to Neon specifically but I've worked a lot with analytic databases, which often use NVMe SSD caches to operate efficiently on S3 data. For time-ordered datasets like observability (e.g., metrics) most queries go to recent data which in the steady state is not just in NVMe SSD storage but generally RAM as well if you are properly tuned. For example, indexes and other metadata are permanently cached.

In realistic tests of the above scenario, the effect of NVMe SSD can be surprisingly muted. That's especially true if you can use clusters that spread processing across multiple compute nodes, which gives you more RAM to play with and also multiplies storage bandwidth.

There are of course downsides to S3, like restarts, which require management to avoid performance issues.

vessenes|11 months ago

Great nerdbaiting ad. I read all the way to the bottom of it, and bookmarked it to send to my kids if I feel they are not understanding storage architectures properly. :)

bddicken|11 months ago

The nerdbaiting will now provide generational benefit!

pjdesno|11 months ago

I love the visuals, and if it's ok with you will probably link them to my class material on block devices in a week or so.

One small nit:

> A typical random read can be performed in 1-3 milliseconds.

Um, no. A 7200 RPM platter completes a rotation in 8.33 milliseconds, so rotational delay for a random read is uniformly distributed between 0 and 8.33ms, i.e. mean 4.16ms.
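
That rotational-delay arithmetic generalizes to any spindle speed:

```python
# Mean rotational delay for a random read: half a revolution, since the
# wait is uniformly distributed over one full rotation.
def mean_rotational_delay_ms(rpm):
    rev_ms = 60_000 / rpm          # one full revolution, in ms
    return rev_ms / 2

for rpm in (5400, 7200, 15000):
    print(f"{rpm:>5} RPM: {mean_rotational_delay_ms(rpm):.2f} ms mean delay")
```

And that is rotational delay alone, before seek time is added on top.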

> a single disk will often have well over 100,000 tracks

By my calculations a Seagate IronWolf 18TB has about 615K tracks per surface given that it has 9 platters and 18 surfaces, and an outer diameter read speed of about 260MB/s. (or 557K tracks/inch given typical inner and outer track diameters)

For more than you ever wanted to know about hard drive performance and the mechanical/geometrical considerations that go into it, see https://www.msstconference.org/MSST-history/2024/Papers/msst...

bddicken|11 months ago

Whoah, thanks for sharing the paper.

jgalt212|11 months ago

Disk latency, and one's aversion to it, is IMHO the only way Hetzner costs can run up on you. You want to keep the database on local disk, not on their very slow attached Volumes (Hetzner's EBS). In short, you can have relatively light workloads that end up on somewhat expensive VMs because you need 500GB or more of local disk. 1TB of local disk is the biggest VM they offer in the US, at 300 EUR a month.

rsanheim|11 months ago

That great infographic at the top illustrates one big reason why 'dev instances in the cloud' is a bad idea.

cmurf|11 months ago

Plenty of text but also many cool animations. I'm a sucker for visual aids. It's a good balance.

carderne|11 months ago

I'm always curious about latency for all these newdb offerings like PlanetScale/Neon/Supabase.

It seems like they don't emphasise strongly enough: _make sure you colocate your server in the same cloud/AZ/region/DC as our db_. I suspect a large fraction of their users don't realise this and have loads of server-db traffic happening very slowly over the public internet. It won't take many slow db reads (get session, get a thing, get one more) to trash your server's response latency.

cynicalsecurity|11 months ago

That was a cool advertisement, I must give them that.

anonymousDan|11 months ago

Nice article, but the replicated approach isn't exactly comparing like with like. To achieve the same semantics you'd need to block for a response from the remote backup servers which would end up with the same latency as the other cloud providers...

bloopernova|11 months ago

Fantastic article, well explained and beautiful diagrams. Thank you bddicken for writing this!

bddicken|11 months ago

You are welcome!

SAI_Peregrinus|11 months ago

> The next major breakthrough in storage technology was the hard disk drive.

There were a few storage methods in between tape & HDDs, notably core memory & magnetic drum memory.

samwho|11 months ago

Gosh, this is beautiful. Fantastic work, Ben. <3

gozzoo|11 months ago

Can someone share their experience in creating such diagrams? What libraries and tools are useful for such interactive diagrams?

bddicken|11 months ago

For this particular one I used d3.js, but honestly this isn't really the type of thing it's designed for. I've also used GSAP for this type of thing on this article I wrote about database sharding.

https://planetscale.com/blog/database-sharding

Joel_Mckay|11 months ago

Do you mean something for data visualization, or tricks condensing large data sets with cursors?

https://d3js.org/

Best of luck =3

aftbit|11 months ago

Hrm "unlimited IOPS"? I suppose contrasted against the abysmal IOPS available to Cloud block devs. A good modern NVMe enterprise drive is specced for (order of magnitude) 10^6 to 10^7 IOPS. If you can saturate that from database code, then you've got some interesting problems, but it's definitely not unlimited.

bddicken|11 months ago

Technically any drive has a finite IOPS capacity. We have found that no matter how hard we tried, we could not get MySQL to exhaust the max IOPS of the underlying hardware. You hit CPU limits long before hitting IOPS limits. Thus "infinite IOPS."

r3tr0|11 months ago

We are working on a platform that lets you measure this stuff with pretty high precision in real time.

You can check out our sandbox here:

https://yeet.cx/play

liweixin|11 months ago

Amazing! The visualizations are so great!

dangoodmanUT|11 months ago

what local nvme is getting 20us? Nitro?