Great post. This one always brings a smile to my face:
> Every component is crash-only
I was part of the team that developed a distributed, five-9's control system for an industry where downtime costs millions per minute and comes with a federal investigation if long enough. On top of that, the industry is made up of competitors that explicitly distrust each other, so all components had to be truly distributed, with no central coordination for anything.
Given the requirements we decided to explicitly adopt a crash-only approach. Between idempotent operations, horizontal scaling, and fast restart times, we could make failing components not impact SLAs (and we had testing to ensure it).
Once it got out into the field (which, because of how risk averse this industry is, is measured in years), it turned out they really did not like software crashing. They interpreted crashing as bad quality, and no amount of "we do it on purpose to ensure correctness" was going to make them happy.
Rebrand it as "fault tolerant" and/or "adverse environment certified" and you should be good to go. That's how they do it in the military sector at least.
> The main advantage of distributed transactions is that they make distributed systems look less distributed by choosing CP, but that inherently trades off availability!
This is true, but I suspect that it's slightly missing the important thing about transactions. A transaction is an operation that takes the database from one Consistent (ACID "C", which you can think of as "legal under the business logic") state to another Consistent state. Linearizability (CAP "C") isn't enough to do that, because changes in databases often require "take from Bob and give to Alice", or "check Bob and Alice's balances and add the order", neither of which fits well into Linearizability's single-key definition of the world. Allowing developers to think about a stream of operations that moves the database from one legal state to another is super powerful. The whole point is that it provides an abstraction that hides concurrency (ACID "I") and partial failure (ACID "A"). Saving developers from reasoning about those is a big win!
> I should also note that while Distributed Transactions might be a useful tool in building idempotency, simply wrapping a non idempotent operation (e.g. “add 1 to X”) in a transaction does not make it idempotent.
The OP is right that this isn't a panacea, especially where those transactions aren't idempotent. But transactions are a mechanism to implement idempotence ("insert order number 10 if it isn't there already"), and idempotence and ACID "C" can be really hard to achieve without transactions (or at least "I" and "A").
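A minimal sketch of that "insert order number 10 if it isn't there already" idea, using SQLite as a stand-in for a transactional store (table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")

def place_order(order_id, amount):
    # The unique primary key plus the transaction make retries safe:
    # a duplicate insert is detected and skipped instead of applied twice.
    with conn:  # opens a transaction; commits or rolls back atomically
        conn.execute(
            "INSERT OR IGNORE INTO orders (order_id, amount) VALUES (?, ?)",
            (order_id, amount),
        )

place_order(10, 99.50)
place_order(10, 99.50)  # a retry of the same order is a no-op
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

The transaction gives you "I" and "A"; the unique key is what actually makes the retry idempotent.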
Transactions, CRDTs, and the CALM theorem are linked too. You can definitely have transactions in systems that aren't CAP "C" consistent, and still have them do legal things. The CALM theorem lays out one way to think about those, and CRDTs are a kind of object-oriented embodiment of that theory.
Great points, transactions are certainly useful in helping developers think about state transitions. I think some of the snark might come from my personal struggles with trying to convey why wrapping non-idempotent state transitions in "BEGIN TRANSACTION ... COMMIT" doesn't immediately make the system reliable. I completely agree transactions make understanding the state transitions easier, and that is valuable.
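To make the "+1 in a transaction" pitfall concrete, here is a toy sketch (SQLite again, names invented): the update is atomic and isolated, yet a client that retries after an ambiguous timeout still double-counts.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, value INTEGER)")
conn.execute("INSERT INTO counters VALUES ('x', 0)")

def add_one(name):
    # Atomic and isolated, but NOT idempotent: each retry adds again.
    with conn:
        conn.execute(
            "UPDATE counters SET value = value + 1 WHERE name = ?", (name,)
        )

add_one("x")
add_one("x")  # retry after a timeout whose outcome was unknown
value = conn.execute(
    "SELECT value FROM counters WHERE name = 'x'"
).fetchone()[0]
# value is now 2, not 1 -- the transaction didn't buy idempotence
```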
I do think CRDTs or idempotency/fencing tokens are also a valuable way to reason about state transitions, and they can provide much lower latency in a global distributed system.
> Allowing developers to think about a stream of operations that moves the databases from one legal state to another is super powerful.
You're right—it's super powerful.
We used this "RSM intuition" to test TigerBeetle's strict serializability, by verifying state transitions the instant they happen, taking advantage of the test simulator's knowledge of in-flight client requests, instead of trying to piece everything together and verify strict serializability after the fact.

Here's TB's state checker in 49 lines of Zig: https://github.com/coilhq/tigerbeetle/blob/477d6df366e2c10fa...
I feel like the correct approach is accepting that determinacy is nonsensical in a world where time is relative and instead doubling down on nondeterministic (but predictable!) algorithms. This means leveraging concepts like commutativity and associativity to ensure predictability.
This was a great post and covers many day-to-day topics that practitioners tend to hand-wave over, especially as distributed systems become more pervasive. Dare I say some of these statements are even becoming clichés.
The section on distributed transactions could have a little more nuance, particularly the example about the counter, where I suspect any system offering transactions also has a CAS operation. Additionally, the benefit of a transactional system is that you can offer bounded counters, whereas an AP or "strong" EC (CRDTs) system cannot.
Thanks, I'm glad you liked it! Your point on distributed transactions is very true, using CAS is what I meant by "transactionally advance a summary".
For example, you could place a unique identifier on every count event and then roll up those deltas in the background and transactionally advance a summary, either preventing ingestion after some time delay or handling recounting.
Certainly transactions can help, but you still have to data model correctly for failure.
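The rollup scheme described above can be sketched roughly like this (function and variable names are invented; a real system would persist the events and the seen-id set transactionally):

```python
def apply_events(summary, seen_ids, events):
    """Roll up count deltas, deduplicating by each event's unique id.

    `events` may contain retried duplicates; because every event carries
    a unique identifier, replaying the stream is idempotent and the
    summary can be advanced safely in the background.
    """
    for event_id, delta in events:
        if event_id in seen_ids:  # already counted: a retry, skip it
            continue
        seen_ids.add(event_id)
        summary += delta
    return summary

summary = 0
seen = set()
# The same delta delivered twice (e.g. an at-least-once retry):
summary = apply_events(summary, seen, [("e1", 5), ("e2", 3), ("e1", 5)])
```

The point is that the data model (unique event ids) does the work; the transaction only makes advancing the summary safe against partial failure.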
One disadvantage of people very familiar with distributed systems is that they might not try hard enough to avoid building unnecessarily distributed systems.
Beyond the pedantic distinction, is there any real point to not calling "at-least-once delivery with idempotent processing" exactly-once processing? I can't imagine that any external observers would be able to tell.
It conveys a false sense of correctness. Usually the system doing the processing has to use higher level or external methods of providing idempotency.
For example, TCP implements "exactly once processing" by your definition, but you probably still want Stripe to include idempotency keys in their charge API so you don't pay twice.
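Here is a toy sketch of the idempotency-key pattern (modeled loosely on how payment APIs accept a client-chosen key; the class and field names are made up): an external observer sees each charge applied once even though the message may arrive several times.

```python
class ChargeProcessor:
    """At-least-once delivery plus deduplication by idempotency key."""

    def __init__(self):
        self.results = {}  # idempotency_key -> result of first attempt
        self.charged = 0   # total money actually moved

    def charge(self, idempotency_key, amount):
        if idempotency_key in self.results:
            # Duplicate delivery or client retry: replay the old answer
            # instead of charging again.
            return self.results[idempotency_key]
        self.charged += amount
        self.results[idempotency_key] = {"status": "ok", "amount": amount}
        return self.results[idempotency_key]

p = ChargeProcessor()
first = p.charge("key-123", 100)
second = p.charge("key-123", 100)  # duplicate: deduplicated, not re-charged
```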
It’s a problem for those of us who build consumer APIs, because people without a deep distributed systems background quite reasonably expect that “exactly once” means their downstream application will receive each event exactly once. I’ve had multiple incredibly frustrating conversations where I had to find a diplomatic way to explain that yes, your connector needs to handle idempotency, and no, it’s not because there’s some “exactly once” technology our system is missing.
Am I missing why a distributed lock is an impossibility? The problem stated is that a partitioned node can't know it has lost the lock, but this is only an issue if there is a way to lose the lock short of returning it.
Which I guess is to say: what difference is there between a lease with an infinite timeout unless manually returned, and a "lock"?
Certainly the system deadlocks under partition but I'm not sure why that makes this "impossible".
> a lease with an infinite timeout unless manually returned
I would argue that "infinite timeout" is another negative shibboleth.
Every operation in a distributed system has some duration after which you can be 99.9% confident (or 99.9999%, or whatever threshold you want to pick) that it was lost to the void and will never return a result.

In a robust distributed system, you want to pick a reasonable timeout value and then take appropriate action in response to the timeout. Typically this is retrying the operation, bubbling up a failure message to a higher level, or some combination of the two (retry a few times, fail if all the retries fail).

An infinite timeout represents a deliberate design choice of "I don't want to handle the case of this message or API call being lost in transit and never returning either success or failure".

In my experience, infinite timeouts are often the cause of "hmm, this thing is up and running but seems 'stuck' and not making any progress, let me try manually restarting this service... OK, that seems to have recovered it" bugs and production alerts.
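As a toy sketch of that timeout-and-bounded-retry pattern (the flaky call and the retry budget are invented for illustration; a real client would enforce a deadline on a socket):

```python
def flaky_call(attempt):
    # Stand-in for a network request: the first two attempts are lost
    # to the void and surface as timeouts.
    if attempt < 3:
        raise TimeoutError("no response within the deadline")
    return "ok"

def robust_call(max_retries=5):
    # Finite timeout + bounded retries: retry a few times, then bubble
    # the failure up to the caller instead of waiting forever.
    for attempt in range(1, max_retries + 1):
        try:
            return flaky_call(attempt)
        except TimeoutError:
            continue
    raise RuntimeError("gave up after %d attempts" % max_retries)

result = robust_call()  # succeeds on the third attempt
```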
The short answer is that if the lock holder fails, your other nodes have no way of knowing whether the lock holder failed (a consequence of the FLP impossibility result). If you set a timeout, then that's a lease.
The long answer is to peel this onion for yourself and see where it leads. It’s a lot of fun.
At our job someone decided to use a ready-to-use Go library which used Redis for distributed locking. I found that it was broken by design and completely unreliable, and we had random transient errors stemming from it. It worked OK 99.9% of the time, but once in a while we were getting inconsistent state in our application.

The description initially made sense and the usage looked simple. It worked by a node creating a value with a TTL, which was used to make the lock auto-expire if a node crashed. If a node found that a value under the same name already existed in Redis, it would block. Since access to Redis is serialized, all such actions were basically atomic.

The problem was the auto-expire feature. The TTL can expire while your code under the lock is scheduled out due to GC or waiting for I/O, so the lock you held could be released at basically any point of execution while you were supposedly under the lock. Extending the lock's TTL after every line of code isn't practical and is probably prone to race conditions anyway (and the library IIRC didn't provide a way to do it). I read there's a technique called fencing tokens, but it requires additional changes to the way your shared resources are accessed, which isn't always possible. I still don't know how to do distributed locks right, and there seem to be many broken implementations in the wild.
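For what it's worth, the fencing-token idea mentioned above can be sketched like this (toy classes with invented names; real storage would have to check the token atomically alongside the write):

```python
class LockService:
    """Hands out leases with monotonically increasing fencing tokens."""

    def __init__(self):
        self.token = 0

    def acquire(self):
        self.token += 1
        return self.token  # fencing token for this lease

class Storage:
    """Rejects writes carrying a token older than the newest one seen."""

    def __init__(self):
        self.highest_token = 0
        self.data = None

    def write(self, token, value):
        if token < self.highest_token:
            raise PermissionError("stale lease holder fenced off")
        self.highest_token = token
        self.data = value

locks, store = LockService(), Storage()
t1 = locks.acquire()           # client A gets the lease...
t2 = locks.acquire()           # ...A's lease expires during a GC pause; B acquires
store.write(t2, "from B")      # B writes with the newer token
try:
    store.write(t1, "from A")  # A wakes up and tries to write: fenced
    fenced = False
except PermissionError:
    fenced = True
```

This is exactly why fencing requires the shared resource itself to cooperate: the storage, not the lock service, is what ultimately rejects the stale writer.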
This article is one of the few I've ever read on distributed systems design that is even remotely close to what I'd call "correct". The number of disagreements I've had with colleagues when I propose crash-only modes of operation is unbelievable.
Follow the guidelines in this post and you'll indeed end up with (more) robust systems. Great writeup.
And here are a few more positive phrases: "total order", "committed/uncommitted log", "replicated state machine", "logical timestamp", "the network is not homogeneous", "bandwidth is not infinite", "tail latency tolerance", "fault model", "message passing", "recovery", "cascading failure", "metastable", "strict serializability", and "stable storage"!
Surprisingly, at least to me, it's jarring to hear the phrase "CAP" because the field is far more nuanced. That "C" alone has so many different flavors! Far more interesting to talk about the FLP result.
Watch out also for "RPC" because that's not isomorphic with optimal consensus protocols, where messages follow multi-path routing (A->C->B->A). Granted, Raft uses the term "RPC" (A->B->A), but that's more a concession to accessibility than the broadest way to think of distributed systems, or necessarily the best way to implement them. Message passing is simpler and more realistic, as it helps you see and think about the network/process fault model more clearly.
Distributed testing techniques are also moving towards autonomous deterministic testing, as pioneered by FoundationDB, where the database itself is the test simulator—these tend to get into more interesting state spaces that can also be replayed instantly from a seed, compared to external test harnesses that run for hours in real time and that can't reproduce distributed bugs deterministically in exactly the same way every time.
> database vendors might try just a little harder to tell the truth...
Come on, you know that's not what's going to happen. If they notice at all, they'll just incorporate the magic phrases into their BS so you have to hunt harder for a real signal.
I'm a newbie and a little confused. On one hand there are posts like this that claim exactly-once delivery and distributed locks are impossible. But on the other hand, if I look at the docs of a distributed database, say Apache Ignite, they will say that they have exactly-once delivery [1] and distributed locks [2]. So ... which is it?

[1] https://ignite.apache.org/docs/latest/key-value-api/continuo...

[2] https://ignite.apache.org/docs/latest/distributed-locks
This is exactly the kind of stuff I wrote this post about. The first (exactly once) is actually just at-least-once with deduplication based on a counter. To reliably process the events, however, you need to make your downstream idempotent as well. Think of it like this: your event processor might fail, so even if you only receive the message "once", if you can fail while processing it you still have to think about retries. In my opinion it would be explicitly better for the event system to provide "at least once" and intentionally duplicate events on occasion to test your processor's ability to handle duplicates.
The second (lock) is actually a lease as far as I can tell, and writing code in the locked body that assumes it will be truly mutually exclusive is pretty dangerous (see the linked post from Martin [1] for why).

[1] https://martin.kleppmann.com/2016/02/08/how-to-do-distribute...
Software vendors lie. Even open source ones. In some cases they get REALLY close to the truth, but aren't forthcoming with that last 1% case where their statement becomes a lie.
If you want to see a ton of lies, go and start reading through all the Jepsen posts tearing down those dubious claims: http://jepsen.io/analyses
> One would be eventual consistency.

In the vein of TFA, this would be a negative shibboleth (which goes to show how being pedantic in certain contexts is just silly). The reason is that eventual consistency carries no guarantee on what "eventual" means. If your replicas converge ten years after a change is made, you can (correctly) claim to have eventual consistency.
It is notable that the linked article [1] is critical of those that don't understand these concepts, then goes on to misunderstand some of the concepts.
Idempotent means that repeating the same operation eventually stabilizes, not that Op^2 = Op. It combines retryability of failures with retryability of successes.
Alternatively, idempotency applies to successful operations, orthogonal from error cases.
Retryability doesn't help in the case of the (bad) operation "+1" which is not idempotent.
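In code, the algebraic distinction the thread is circling around (applying an operation twice vs. once):

```python
def set_to_five(x):
    # Idempotent: f(f(x)) == f(x), so retrying a success is harmless.
    return 5

def add_one(x):
    # Not idempotent: retrying a success re-applies the change.
    return x + 1

is_idempotent = set_to_five(set_to_five(0)) == set_to_five(0)
is_not_idempotent = add_one(add_one(0)) != add_one(0)
```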
caffeine|3 years ago
The solution here is to rebrand it with some vague euphemism:
“Ah yes the component underwent a state calibration”
fh973|3 years ago
Jokes aside, there is even a body of research papers around this subject, if you need some backing.
Ozzie_osman|3 years ago
A few more positive shibboleths: one would be eventual consistency. Another would be discussing write paths vs read paths (or patterns) and recognizing that those can be decoupled (or a mention of CQRS).
bmcniel|3 years ago
[1] https://codahale.com/you-cant-sacrifice-partition-tolerance/
fortran77|3 years ago
https://www.sefaria.org/Judges.12.6?ven=Tanakh:_The_Holy_Scr...
furstenheim|3 years ago
I prefer the term retryable to idempotent. If there's a failure in the first call, to be truly idempotent it should fail on the second.
Retryable on the other hand is easier to argue about. Important thing is not the response but the end state of the system.