- concurrency is very hard
- .. but object storage "solves" most of that for you, handing you a set of semantics which work reliably
- single file throughput sucks hilariously badly
- .. because 1GB is ridiculously large for an atomic unit
- (this whole thing resembles a project I did a decade ago for transactional consistency on TFAT on Flash, except that somehow managed faster commit times despite running on a 400MHz MIPS CPU. Edit: maybe I should try to remember how that worked and write it up for HN)
- therefore, all of the actual work is shifted to the broker. The broker is just periodically committing its state in case it crashes
- it's not clear whether the broker ACKs requests before they're in durable storage? Is it possible to lose requests in flight anyway?
- there's a great design for a message queue system between multiple nodes that aims for at-least-once delivery, has existed for decades, and maintains high throughput: SMTP. Actually, there's a whole bunch of message queue systems?
> The broker runs a single group commit loop on behalf of all clients, so no one contends for the object. Critically, it doesn't acknowledge a write until the group commit has landed in object storage. No client moves on until its data is durably committed.
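A minimal sketch of that quoted group-commit loop, assuming a hypothetical `store` client with a CAS-style conditional write (this is not the article's actual code):

```python
import threading
import time

class GroupCommitBroker:
    """Batches enqueue requests into one object-store write per round.

    `store` is a hypothetical client: get(key) -> (list, etag) and
    cas_put(key, value, etag) -> bool, returning False if the etag no
    longer matches (e.g. a conditional PUT hitting a precondition failure).
    """

    def __init__(self, store, key="queue.json"):
        self.store = store
        self.key = key
        self.lock = threading.Lock()
        self.pending = []  # [(record, threading.Event), ...]

    def enqueue(self, record):
        done = threading.Event()
        with self.lock:
            self.pending.append((record, done))
        done.wait()  # the client blocks until its batch is durable

    def commit_loop(self):
        while True:
            with self.lock:
                batch, self.pending = self.pending, []
            if not batch:
                time.sleep(0.01)
                continue
            # One read-modify-write for the whole batch; retry on CAS conflict.
            while True:
                records, etag = self.store.get(self.key)
                records.extend(r for r, _ in batch)
                if self.store.cas_put(self.key, records, etag):
                    break
            # Only ACK once the group commit has landed in object storage.
            for _, done in batch:
                done.set()
```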
The original graph appears to simply show the blocking issue of their previous synchronisation mechanism: 10 min to process an item, down to 6 min. Any central system would seem to resolve this for them.
In any organisation it's good to make choices for simplicity rather than small optimisations - you're optimising for maintenance, incident resolution, and development.
Typically I have a small pg server for these things. It'll work out slightly more expensive than this setup for one action, yet it will cope with so much more - extending to all kinds of other queues and config management - with simple management, off-the-shelf diagnostics, etc.
While the object store is neat, there is a confluence of factors which make it great and simple for this workload, that may not extend to others. 200ms latency is a lot for other workloads, 5GB/s doesn't leave a lot of headroom, etc. And I don't want to be asked to diagnose transient issues with this.
So I'm torn. It's simple to deploy and configure from a fresh deployment PoV. Yet it wouldn't be accepted into any deployment I have worked on.
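For reference, the queue-table pattern suggested above is roughly this sketch with psycopg, assuming a made-up `jobs(id, payload, done)` table:

```python
import psycopg  # psycopg 3; assumes a reachable Postgres instance

CLAIM_SQL = """
SELECT id, payload FROM jobs
WHERE NOT done
ORDER BY id
FOR UPDATE SKIP LOCKED  -- concurrent workers never claim the same row
LIMIT 1
"""

def handle(payload):
    print("processing", payload)  # stand-in for real work

def work_one(conn):
    """Claim, process, and complete a single job; returns False when idle."""
    with conn.transaction():
        row = conn.execute(CLAIM_SQL).fetchone()
        if row is None:
            return False
        job_id, payload = row
        handle(payload)  # if this raises, the transaction rolls back and
                         # the row is released for another worker
        conn.execute("UPDATE jobs SET done = true WHERE id = %s", (job_id,))
    return True

# usage sketch:
# with psycopg.connect("dbname=app") as conn:
#     while work_one(conn):
#         pass
```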
Yeah, I mean, I think we're all basically doing this now, right? I wouldn't choose this design, but I think something similar to DeltaLake can be simplified down for tons of use cases. Manifest with CAS + buffered objects to S3, maybe compaction if you intend to do lots of reads. It's not hard to put it together.
You can achieve stupidly fast read/write operations if you do this right, with a system that is shockingly simple to reason about.
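The manifest-with-CAS pattern described above, sketched with a hypothetical object-store client (not DeltaLake's actual protocol):

```python
import json
import uuid

def commit(store, records):
    """Buffer data into an immutable object, then CAS the manifest.

    `store` is a hypothetical client (put / get / cas_put); the CAS step
    maps to a conditional PUT in a real object store.
    """
    # 1. Write the batch to an immutable object under a unique key.
    data_key = f"data/{uuid.uuid4()}.json"
    store.put(data_key, json.dumps(records))

    # 2. CAS-append the key to the manifest, retrying if we lose the race.
    while True:
        raw, etag = store.get("manifest.json")
        manifest = json.loads(raw)
        manifest["objects"].append(data_key)
        if store.cas_put("manifest.json", json.dumps(manifest), etag):
            return data_key
```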
> Step 4: queue.json with an HA brokered group commit
> The broker is stateless, so it's easy and inexpensive to move. And if we end up with more than one broker at a time? That's fine: CAS ensures correctness even with two brokers.
TBH this is the part that I think is tricky: resolving this in a way that doesn't end up with tons of clients wasting time talking to a broker that buffers their writes, pushes them, then always fails. I solved this at one point with token fencing, then decided it wasn't worth it, and now I just use a single instance to manage all writes. I'd again point to DeltaLake for the "good" design here, which is to have multiple manifests and only serialize compaction, which also unlocks parallel writers.
The other hard part is data deletion. For the queue it looks dead simple since it's one file, but if you want to ramp up your scale and get multiple writers or manage indexes (also in S3), then deletion becomes something you have to slip into compaction. Again, I had it at one point and backed it out because it was painful.
But I have 40k writes per second working just fine for my setup, so I'm not worrying. I'd suggest others basically punt as hard as possible on this. If you need more writes, start up a separate index with its own partition for its own separate set of data, or do naive sharding.
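For the curious, the token fencing mentioned above works roughly like this: each writer claims a monotonically increasing epoch, stamps it into the state, and CAS rejects anything from a stale instance. A sketch under the same hypothetical `store` client as before:

```python
def acquire_epoch(store, key="broker-epoch.json"):
    """Claim the next fencing token by CAS-incrementing a counter object."""
    while True:
        state, etag = store.get(key)  # e.g. {"epoch": 41}
        epoch = state["epoch"] + 1
        if store.cas_put(key, {"epoch": epoch}, etag):
            return epoch  # this instance is now the fenced writer

def fenced_write(store, epoch, key, new_state):
    """Refuse to write if a broker with a newer token has taken over."""
    current, etag = store.get(key)
    if current.get("epoch", 0) > epoch:
        raise RuntimeError("fenced out: a newer broker owns this object")
    new_state["epoch"] = epoch  # stamp our token so later writers can tell
    return store.cas_put(key, new_state, etag)
```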
A lot of good insights here. I am also wondering if they could simply put jobs (unclaimed, in-progress, deleted/done) into different directories/prefixes, and rely on an atomic object rename primitive [1][2][3] to solve the problem more gracefully (group commit could still be used if needed).

[1] https://docs.cloud.google.com/storage/docs/samples/storage-m...
[2] https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameOb...
[3] https://fractalbits.com/blog/why-we-built-another-object-sto...
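Assuming a store that exposes an atomic rename (as in the linked GCS move-object and S3 RenameObject APIs; a plain copy+delete on classic S3 is not atomic), the claim flow might look like this sketch with a hypothetical client:

```python
def claim(store, job_id):
    """Claim a job by atomically moving it between state prefixes."""
    try:
        # Hypothetical atomic rename: exactly one worker wins this race.
        store.rename(f"unclaimed/{job_id}", f"in-progress/{job_id}")
        return True
    except KeyError:  # source gone: another worker claimed it first
        return False

def complete(store, job_id):
    store.rename(f"in-progress/{job_id}", f"done/{job_id}")
```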
What you describe is very similar to how Icechunk[1] works. It works beautifully for transactional writes to "repos" containing PBs of scientific array data in object storage.

[1]: https://icechunk.io/en/latest/

Could you expand on that? Even if it wasn't the approach you stuck with, I'm curious.
Similar idea but you have the power of S3 scale (if you really need it). For context, I do not work at WS. My company switched to it recently and we've seen great improvements over traditional Kafka.
The usual path an engineer takes is to take a complex and slow system and reengineer it into something simple, fast, and wrong. But as far as I can tell from the description in the blog, it actually works at scale! This feels like a free lunch and I'm wondering what the tradeoff is.
It seems like this is an approach that trades off scale and performance for operational simplicity. They say they only have 1GB of records and they can use a single committer to handle all requests. Failover happens by missing a compare-and-set, so there's probably a second of latency to become leader?
This is not to say it's a bad system, but it's very precisely tailored for their needs. If you look at the original Kafka implementation, for instance, it was also very simple and targeted. As you bolt on more use cases and features you lose the simplicity to try and become all things to all people.
Love this writeup. There's so much interesting stuff you can build on top of Object Storage + compare-and-swap. You learn a lot about distributed systems this way.
I'd love to see a full sample implementation based on s3 + ecs - just to study how it works.
Depending on who hosts your object storage, this seems like it could get much more expensive than using a queue table in your database? But I'm also aware that this is a blog post from an object storage company.
We don't have a relational database, otherwise that would work great for a queue! You can imagine us continuing to iterate here to Step 5, Step 6, ... Step N over time. The tradeoff of each step is complexity, and complexity has to be deserved. This is working exceptionally well currently.
This post touches on a realisation I made a while ago: just how far you can get with the guarantees and trade-offs of object storage.
What actually _needs_ to be in the database? I've never gone as far as building a job queue on top of object storage, but have been involved in building surprisingly consistent and reliable systems with object storage.
Does this suffer from the ABA problem, or does object storage solve that for you by e.g. refusing to accept writes where content has changed between the read and write?
> refusing to accept writes where content has changed between the read and write?
Right. You can issue a write that will only be accepted if a condition is matched, like the etag of the object matching your expectation. If it doesn't match, your object was invalidated.
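Concretely, with boto3 against a store that supports conditional writes (S3 only grew `IfMatch` on PutObject fairly recently, so parameter support depends on SDK version; GCS uses generation preconditions instead), the pattern looks roughly like:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def cas_update(bucket, key, mutate):
    """Read-modify-write guarded by the object's etag (optimistic locking)."""
    obj = s3.get_object(Bucket=bucket, Key=key)
    new_body = mutate(obj["Body"].read())
    try:
        # Only succeeds if the object still carries the etag we read.
        s3.put_object(Bucket=bucket, Key=key, Body=new_body,
                      IfMatch=obj["ETag"])
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] in ("PreconditionFailed",
                                           "ConditionalRequestConflict"):
            return False  # lost the race; caller should re-read and retry
        raise
```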
It is by the juice of Zig that binaries acquire speed, the allocators acquire ownership, the ownership becomes a warning. It is by typography alone I can know turbopuffer is written in zig.
Assuming you're already using object storage in your project but don't use Redis yet, it wouldn't be re-inventing, just avoiding an extra dependency that would only be used by a single feature.
For performance reasons we needed a set of assets on all copies of a service. We were using Consul for the task management, which is effectively a tree of data that's tantamount to a JSON file (in fact we usually pull trees of data as a JSON file).
Among other problems, I knew the next thing we were going to have to do was autoscaling, and the system we had for call and response was a mess in that respect. Unanswered questions were: how do you know when all agents have succeeded, how do you avoid overwriting your peers' data, and what do you do with agents that existed yesterday and don't today?
I ended up rewriting all of the state management data so that each field had one writer and one or more readers. It also allowed me to remove the last live service call to another service and decommission it. Instead of having an admin service, you just called one of the peers at random and elected it leader for the duration of that operation. I also arranged the data so the leader could watch the parent key for the roll call and avoid needing to poll.
Each time a task was created, the leader would do a service discovery call to get a headcount and then wait for everyone to set a success or failure state. Some of these state transitions were idempotent, so if you reissued a task you didn't need to delete the old results. Everyone who had already completed it would no-op, and the ones that failed or the new servers that joined the cluster would finish up. If there was a delete operation later, then the data would be purged from the data set and the agents, and a subsequent call would be considered new.
Long story short, your CS program should have distributed computing classes because this shit is hard to work out from first principles when you don’t know what the principles even are.
The window has already passed for this kind of opportunity, since there are dozens of people all doing the same thing. Also, abusing object storage is not very fun.
Just slap some garbage on it and it is better.
Exponential backoff with random jitter is the solution there. It can be a bit tricky though, because the random jitter can reduce throughput if it's too generous or not generous enough.
For instance, backing off 53 ms is a lot of latency. Backing off 1 μs may still result in a collision, requiring two or three backoffs before it resolves, whereas a longer pause on the first backoff might have resulted in quicker recovery.
Optimistic locking is a statistical model, just like Bloom filters are, and scaling something 20 or 100 times higher is a hell of a lot of runway. Especially if the failure mode is a person sharing someone else's account or using a laptop and tablet at exactly the same time.
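The standard capped full-jitter formulation, as a sketch:

```python
import random
import time

def retry_with_jitter(attempt, base=0.005, cap=1.0, max_tries=8):
    """Retry an optimistic-concurrency op with capped, full-jitter backoff.

    `base` is the knob the parent comment is talking about: too small and
    contenders keep colliding, too large and you pay latency for nothing.
    """
    for n in range(max_tries):
        if attempt():  # e.g. a CAS write; True means it landed
            return True
        # sleep a uniform random amount in [0, min(cap, base * 2^n)]
        time.sleep(random.uniform(0, min(cap, base * (2 ** n))))
    return False
```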