Why EBS didn't work:
- EBS charges for allocated capacity, not actual usage
- EBS is slow at restores from snapshot (faster to spin up a database from a Postgres backup stored in S3 than from an EBS snapshot in S3)
- EBS only lets you attach 24 volumes per instance
- EBS only lets you resize once every 6–24 hours, and you can't shrink a volume or adjust it continuously
- Detaching and reattaching EBS volumes can take anywhere from ~10s for healthy volumes to ~20m for failed ones, so failover takes longer
Why all this matters:
- their AI-agent workloads are ephemeral and snapshot-heavy; they constantly destroy and rebuild EBS volumes
What didn't work:
- local NVMe/bare metal: need 2-3x nodes for durability, too expensive; snapshot restores are too slow
- custom page-server Postgres storage architecture: too complex/expensive to maintain
Their solution:
- block-level copy-on-write (see the sketch after this summary)
- volume changes (new/snapshot/delete) are a metadata change
- storage space is logical (effectively infinite) not bound to disk primitives
- multi-tenant by default
- versioned, replicated k/v transactions, horizontally scalable
- independent service layer abstracts blocks into volumes, is the security/tenant boundary, enforces limits
- user-space block device that pins I/O queues to CPUs and supports zero-copy and online resizing; its performance limits are set by Linux primitives
Performance stats (single volume):
- 1 ms read / 5 ms write latency (4k blocks)
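To make the "volume changes are a metadata change" idea concrete, here's a toy Python sketch (my own illustration, not TigerData's implementation) of a block map where a snapshot copies only metadata, and writes copy blocks on demand:

    class BlockPool:
        """Shared, append-only pool of immutable data blocks."""
        def __init__(self):
            self.blocks = {}
            self.next_id = 0

        def put(self, data: bytes) -> int:
            block_id = self.next_id
            self.blocks[block_id] = bytes(data)
            self.next_id += 1
            return block_id

        def get(self, block_id: int) -> bytes:
            return self.blocks[block_id]

    class CowVolume:
        """A volume is just a mapping: block index -> block id in the pool."""
        def __init__(self, pool: BlockPool, block_map=None):
            self.pool = pool
            self.block_map = dict(block_map or {})

        def snapshot(self) -> "CowVolume":
            # O(metadata): copy the mapping, never the data blocks themselves
            return CowVolume(self.pool, self.block_map)

        def write(self, index: int, data: bytes) -> None:
            # copy-on-write: new data goes to a fresh block; only this
            # volume's map is repointed, so clones keep their old view
            self.block_map[index] = self.pool.put(data)

        def read(self, index: int) -> bytes:
            return self.pool.get(self.block_map[index])

    pool = BlockPool()
    vol = CowVolume(pool)
    vol.write(0, b"hello")
    fork = vol.snapshot()            # "new volume" = one dict copy
    fork.write(0, b"world")          # diverges without touching the parent
    assert vol.read(0) == b"hello" and fork.read(0) == b"world"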
EBS volume attachment is typically ~11s for GP2/GP3 and ~20-25s for other types.
1ms read / 5ms write latencies seem high for 4k blocks. IO1/IO2 is typically ~0.5ms RW, and GP2/GP3 ~0.6ms read and ~0.94ms write.
References: https://cloudlooking.glass/matrix/#aws.ebs.us-east-1--cp--at... https://cloudlooking.glass/matrix/#aws.ebs.*--dp--rand-*&aws...
Note that those numbers are terrible vs. a physical disk, especially latency, which should be < 1ms read, << 1ms write.
(That assumes async replication of the write-ahead log to a secondary. Otherwise, write latency should be ~1 RTT, which is still << 5ms.)
Stacking storage like this isn't great, but PG wasn't really designed for performance or HA. (I don't have a better concrete solution for ANSI SQL that works today.)
E.g. a Micron 7450 PRO 3.84 TB does 735k read / 160k write IOPS at 4K.
Reminds me of about ten years ago, when a large media customer ran NetApp on AWS to get most of what you just described (because EBS's features were, and still are, very poor, and it's also crazy expensive).
I did not set that up myself, but the colleague who worked on it told me that enabling TCP multipathing for iSCSI yielded significant performance gains.
> Detaching and reattaching EBS volumes can take 10s for healthy volumes to 20m for failed ones
Is there a source for the 20m time limit for failed EBS volumes? I experienced this at work for the first time recently but couldn't find anything documenting the 20m SLA (and it did take just about 20 full minutes).
> EBS only lets you resize once every 6–24 hours
Is that even true? I've resized an EBS volume a few minutes after another resize before.
It is used in the first line of the text but no explanation was given.
The 5ms write latency and 1ms read latency sound like they are using S3 to store and retrieve data with some local cache. My guess is an S3-based block store exposed as a network block device. S3 supports compare-and-swap operations (Put-If-Match), so you can do a copy-on-write scheme quite easily. Maybe somebody from TigerData can give a little bit more insight into this. I know SlateDB supports S3 as a backend for their key-value store. We can build a block device abstraction using that.
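For what it's worth, the compare-and-swap pattern described above looks roughly like this with S3 conditional writes (a hedged sketch, requiring a recent boto3; the bucket and key names are made up):

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET, KEY = "my-block-store", "volumes/v1/block/0042"  # hypothetical names

    def cas_write_block(new_data: bytes, expected_etag=None) -> bool:
        """Write one 'block' object only if nobody changed it since we read it."""
        try:
            if expected_etag is None:
                # create-if-absent: rejected if the key already exists
                s3.put_object(Bucket=BUCKET, Key=KEY, Body=new_data,
                              IfNoneMatch="*")
            else:
                # compare-and-swap against the ETag we last observed
                s3.put_object(Bucket=BUCKET, Key=KEY, Body=new_data,
                              IfMatch=expected_etag)
            return True
        except ClientError as e:
            # 412 = precondition failed; a 409 conflict can occur on
            # concurrent conditional writes -- either way, re-read and retry
            if e.response["Error"]["Code"] in ("PreconditionFailed",
                                               "ConditionalRequestConflict"):
                return False
            raise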
I'm really sad to see them waste the opportunity and instead build an nth managed cloud on top of AWS, chasing buzzword after buzzword.
Had they made deals with cloud providers to offer managed TimescaleDB, so they could focus on their core value proposition, they could have won the time-series business. But ClickHouse made them irrelevant, and Neon has already won the "Postgres for agents" business thanks to a better architecture than this.
We think we're still building great things, and our customers seem to agree.
Usage is at an all-time high, revenue is at an all-time high, and we're having more fun than ever.
Hopefully we'll win you back soon.
Also, were existing network or distributed file systems not suitable? This use case sounds like Ceph might fit, for example.
There's some secret sauce there I don't know if I'm allowed to talk about yet, so I'll just address the existing tech that we didn't use: most options either didn't have a good enough license, cost too much, or would have taken a TON of ramp-up and expertise we don't currently have to manage and maintain. Generally speaking, building our own lets us fully control it.
Entirely programmable storage has so far allowed us to try a few different approaches to making things efficient and giving us the features we want. We've been able to try different dedup methods, copy-on-write styles, different compression methods and types, different sharding strategies... all just as a start. We can easily and quickly create a new experimental storage backend and see exactly how pg performs with it side-by-side with other backends.
We're a Kubernetes shop, and we have our own CSI plugin, so we can also transparently run a pg HA pair with one pg server using EBS and the other running on our new storage layer, and easily bounce between storage types with nothing but a switchover event.
I was struck by how similar this seems to Ceph/RADOS/RBD. I.e. how they implemented snapshotted block storage on top, sounds more or less exactly the same as how RBD is implemented on top of RADOS in ceph.
One of the problems with Ceph is that it doesn't operate at the highest-throughput or lowest-latency end of the design space.
DAOS seemed promising a couple of years ago. But in terms of popularity it seems to be stuck: no Ubuntu packages, no widespread deployment, and Optane got killed.
Yet the NVMe + metadata approach seemed promising.
Would love to see more databases fork it to do what you need from it.
Or if folks have looked at it and decided not to do it, an analysis of why would be super interesting.
EC2 instances have dedicated throughput to EBS via Nitro that you lose out on when you run your own EBS equivalent over the regular network. You only get 5Gbps maximum between two EC2 instances in the same AZ that aren't in the same placement group [1], and you're limited by the instance type's general networking throughput. Dedicated throughput to EBS from a typical EC2 instance is multiple times this figure. It's an interesting tradeoff; I assume they must be IOPS-heavy and throughput is not a concern.
[1] https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-inst...
That 5Gbps limit is per flow (e.g. TCP connection), not per instance pair. With enough concurrent flows, you can saturate the interface bandwidth between peers, even if it’s 200Gbps or more.
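As a toy illustration of that point, striping one transfer across several concurrent TCP flows (the peer address is hypothetical, and something must be listening there):

    import socket
    import threading

    PEER, PORT, FLOWS = "10.0.0.42", 9000, 8   # hypothetical receiver
    CHUNK = b"\0" * 65536

    def pump(n_chunks: int) -> None:
        # each individual connection is capped (~5 Gbps per flow on EC2)...
        with socket.create_connection((PEER, PORT)) as s:
            for _ in range(n_chunks):
                s.sendall(CHUNK)

    # ...but several concurrent flows can approach the NIC's aggregate limit
    threads = [threading.Thread(target=pump, args=(10_000,)) for _ in range(FLOWS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()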
I believe this is also changing with instances that now allow you to adjust the ratio of throughput on the NIC that's dedicated to EBS vs. general network traffic (with the intention, I'm sure, that people would want more EBS throughput than the default).
If anyone is interested in reading about a similar "local NVMe made redundant & shared over the network as block devices" engine, last year I did some testing of Silk's cloud block storage solution (1.3M x 8kB IOPS and 20 GiB/s throughput when reading the block store from a single GCP VM). They're using iSCSI with multipathing on the client side instead of a userspace driver:
https://tanelpoder.com/posts/testing-the-silk-platform-in-20...
It's a great way to mix copy-on-write with effectively logical splitting of physical nodes. It's something I've wanted to build at a previous role.
IIUC they built an EBS replacement on top of NVMe attached to a dynamically sized fleet of EC2 instances.
The advantage is that it’s allocating pages on demand from an elastic pool of storage so it appears as an infinite block device. Another advantage is cheap COW clones.
The downside is (probably) specialized tuning for Postgres access patterns. I shudder to think what went into page metadata management. (Perhaps it's similar to, e.g., SQL Server's buffer pool manager.)
It's not clear to me why it's better than the Aurora design; on the surface, page servers are higher-level concepts and should allow more holistic optimizations (and less page-write traffic, since Aurora ships the log in lieu of whole pages). It's also not clear what stopped Amazon from doing the same (perhaps EBS serving more diverse access patterns?).
Very cool!
If you are targeting customers on AWS, don't challenge EBS, because it is a losing game to begin with. There are 100 ways for AWS to optimize, but none of them are available to you.
“The storage device driver exposes Fluid Storage volumes as standard Linux block devices mountable with filesystems such as ext4 or xfs. It...allows volumes to be resized dynamically while online.”
Yet an xfs file system cannot be shrunk at all, and an ext4 filesystem cannot be shrunk without first unmounting it.
Are you simply doing thin provisioning of these volumes, so they appear to be massive but aren't really? I see later that you say you account for storage based on actual consumption.
They can be used with, for example, the listed file systems.
No one claimed the listed file systems would (usefully) cooperate with (all aspects of) the block device's resizing.
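On the thin-provisioning question, a toy model (my illustration, not Tiger's code) of how a volume can advertise a huge logical size while only consuming, and billing for, blocks actually written:

    class ThinVolume:
        """Advertise a huge logical size; consume space only for written blocks."""
        BLOCK = 4096

        def __init__(self, logical_bytes: int):
            self.logical_bytes = logical_bytes  # what the filesystem sees
            self.allocated = {}                 # block index -> data, on first write

        def write(self, index: int, data: bytes) -> None:
            self.allocated[index] = data        # physical allocation happens here

        def physical_bytes(self) -> int:
            return len(self.allocated) * self.BLOCK  # the basis for billing

    vol = ThinVolume(logical_bytes=16 * 2**40)  # a "16 TiB" volume
    vol.write(0, b"x" * 4096)
    print(vol.logical_bytes, vol.physical_bytes())  # 17592186044416 vs 4096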
We just launched a bunch around “Postgres for Agents” [0]:
forkable databases, an MCP server for Postgres (with semantic + full-text search over the PG docs), a new BM25 text search extension (pg_textsearch), pgvectorscale updates, and a free tier.
[0] https://www.tigerdata.com/blog/postgres-for-agents
Hard to say if the above comment is serious or sarcastic.
To my eye, seeing "Agentic Postgres" at the top of the page, in yellow, is not persuasive; it comes across as bandwagony. (About me: I try to be open but critical about new tech developments; I try out various agentic tooling often.)
But I'm not dismissing the product. I'm just saying this part is what I found persuasive:
> Agents spin up environments, test code, and evolve systems continuously. They need storage that can do the same: forking, scaling, and provisioning instantly, without manual work or waste.
That explains it clearly in my opinion.
* Seems to me, there are taglines that only work after someone is "on board". I think "Agentic Postgres" is that kind of tagline. I don't have a better suggestion in mind at the moment, though, sorry.
Are they not using AWS anymore? I found that confusing. It says they're not using EBS and not using attached NVMe, but I didn't think there were other options in AWS?
Tiger Cloud certainly continues to run on AWS. We have built it to rely on fairly low-level AWS primitives like EC2, EBS, and S3 (as opposed to some of the higher-level service offerings).
Our existing Postgres fleet, which uses EBS for storage, still serves thousands of customers today; nothing has changed there.
What’s new is Fluid Storage, our disaggregated storage layer that currently powers the new free tier (while in beta). In this architecture, the compute nodes running Postgres still access block storage over the network. But instead of that being AWS EBS, it’s our own distributed storage system.
From a hardware standpoint, the servers that make up the Fluid Storage layer are standard EC2 instances with fast local disks.
So they've built a competitor to EBS that runs on EC2 and NVMe. Seems like their prices will need to be much higher than those of AWS to get decent profit margins. I really hate being in the high-cost ecosystem of the large cloud providers, so I wouldn't make use of this.
I'm curious whether you evaluated solutions like ZFS/Gluster? Also curious whether you looked at Oracle Cloud, given their faster block storage?