In distributed systems, there’s a common understanding that
it is not possible to guarantee exactly-once delivery of
messages.
This is not only a common understanding, it is a provably correct axiom. For a detailed discussion of the concepts involved, see the "Two Generals' Problem"[0].
To guarantee exactly once processing requires a Single Point of Truth (SPoT) enforcing uniqueness shared by all consumers, such as a transactional persistent store. Any independently derived or generated "idempotency keys" cannot provide the same guarantee.
The author goes on to discuss using the PostgreSQL transaction log to create "idempotency keys", which is a specialization of the aforementioned SPoT approach. A more performant variation of this approach is the "hi/lo" algorithm[1], which can reduce SPoT allocation of a unique "hi" value to once per 2,147,483,648 keys when both hi and lo are 32-bit signed integers restricted to positive values.
Still and all, none of the above establishes logical message uniqueness. Whether two or more messages having the same content are considered distinct (thus mandating different "idempotency keys") or duplicates (thus mandating identical "idempotency keys") is a trait of the problem domain.
Pedantically, axioms are by definition assumed without proof and are not themselves provable; if a statement is provable from axioms/definitions, it is a theorem, not an axiom.
> A more performant variation of this approach is the "hi/lo" algorithm
I am discussing this approach, just not under that name:
> Gaps in the sequence are fine, hence it is possible to increment the persistent state of the sequence or counter in larger steps, and dispense the actual values from an in-memory copy.
In that model, a database sequence (e.g. fetched in 100 increments) represents the hi value, and local increments to the fetched sequence value are the low value.
However, unlike the log-based approach, this does not ensure monotonicity across multiple concurrent requests.
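A minimal single-process sketch of that hi/lo shape (the `HiLoGenerator` class and the in-memory stand-in for a database sequence are invented for illustration):

```python
import itertools
import threading

class HiLoGenerator:
    """Hand out unique keys locally, touching the shared allocator
    (the SPoT) only once per block of `lo_range` keys."""

    def __init__(self, allocate_hi, lo_range=100):
        self._allocate_hi = allocate_hi   # in practice: fetch a DB sequence value
        self._lo_range = lo_range
        self._lock = threading.Lock()
        self._hi = None
        self._lo = lo_range               # forces an allocation on the first call

    def next_key(self):
        with self._lock:
            if self._lo >= self._lo_range:
                self._hi = self._allocate_hi()   # one SPoT round-trip per block
                self._lo = 0
            key = self._hi * self._lo_range + self._lo
            self._lo += 1
            return key

# Stand-in for the shared sequence; a real system would do something like
# SELECT nextval('hi_seq') against the shared store here.
_sequence = itertools.count()
gen = HiLoGenerator(lambda: next(_sequence), lo_range=100)
```

Keys are unique and monotonic within one process, but as noted above, two processes holding different hi blocks can emit keys out of order relative to each other.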
I like to use uuid5 for this. It produces unique keys in a given namespace (defined by a UUID), and for a given input key it always produces the identical output ID.
This has a number of nice properties:
1. You don’t need to store keys in any special way. Just make them a unique column of your db and the db will detect duplicates for you (and you can provide logic to handle them as required, e.g. ignoring the message if the other input fields are the same, or raising an error if a message has the same idempotency key but different fields).
2. You can reliably generate new downstream keys from an incoming key without the need for coordination between consumers, getting an identical output key for a given input key regardless of consumer.
3. In the event of a replayed message it’s fine to republish downstream events because the system is now deterministic for a given input, so you’ll get identical output (including generated messages) for identical input, and generating duplicate outputs is not an issue because this will be detected and ignored by downstream consumers.
4. This parallelises well because consumers are deterministic and don’t require any coordination except by db transaction.
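A minimal sketch of this scheme; the namespace UUID and the key formats below are made up for the example:

```python
import uuid

# Hypothetical application namespace; any fixed UUID works,
# as long as every consumer agrees on it.
PAYMENTS_NS = uuid.UUID("3f2d8a1c-9b7e-4c2a-8f6d-1e5b9c0a7d42")

def idempotency_key(input_key: str) -> uuid.UUID:
    # Same namespace + same input -> same UUID, on every consumer.
    return uuid.uuid5(PAYMENTS_NS, input_key)

incoming = idempotency_key("payment:42")

# Properties 2/3: a consumer derives a downstream key from the incoming
# one without any coordination; a replay regenerates the identical key,
# and the unique column on the target table absorbs the duplicate.
downstream = uuid.uuid5(PAYMENTS_NS, f"refund:{incoming}")
```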
This was my exact solution in the late 1990s, formulated using a UID algorithm I created when confronted with a growing payment-processing load that the centralized hardware of the time could not handle. MS SQL Server could not process the ever-increasing load, yet the firehose of real-time payment transaction volume could not be turned off, so we devised an interim parallel solution using microservices to walk everything over to Oracle with this technique. Everything old is new again as the patterns and cycles ebb and flow.
This article glosses over the hardest bit and bike sheds too much over keys.
> Critically, these two things must happen atomically, typically by wrapping them in a database transaction. Either the message gets processed and its idempotency key gets persisted. Or, the transaction gets rolled back and no changes are applied at all.
How do you do that when the processing isn’t persisted to the same database? I.e., what if the side effect is outside the transaction?
You can’t atomically rollback the transaction and external side effects.
If you could use a distributed database transaction already, then you wouldn’t need idempotency keys at all. The transaction itself is the guarantee.
The external side-effects also need to support idempotency keys, which you propagate. Then you use something like a message queue to drive the process to completion.
I get what you are saying, but I don't think it's fair to call it bike-shedding; getting the keys right is also important, and one can easily screw up that part too.
I'm not sure if TFA implies this (it uses too much of his personal jargon for me to understand everything, and it's Friday), but consider this solution based on his transaction log section: you should use the same database that persists the idempotency key to persist the message, and then consume the messages from it, CDC/outbox-style. Meaning, the database simply acts as an intermediate machine that dedupes the flow of messages. Assuming you're allowed to make the producer wait.
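A toy version of that idea, using SQLite in place of PostgreSQL (table and function names invented): persisting the message and its idempotency key in one transaction lets the unique constraint do the deduplication, and a relay can then stream the `outbox` table downstream.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE outbox (
        idempotency_key TEXT PRIMARY KEY,  -- duplicates rejected by the DB
        payload         TEXT NOT NULL
    )
""")

def publish(key: str, payload: str) -> bool:
    """Persist the message and its key atomically;
    returns False if the key was already seen."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO outbox (idempotency_key, payload) VALUES (?, ?)",
                (key, payload),
            )
        return True
    except sqlite3.IntegrityError:
        return False  # duplicate: the unique constraint fired

# A CDC/outbox relay would then stream rows out of `outbox`,
# so downstream consumers only ever see each key once.
```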
The practical answer is you use a combination of queries and compensating actions to resemble idempotency with the external service. Some people additionally constrain things to be a linear sequence of actions/effects, and call this pattern Sagas. It's sort of a bastardized distributed transaction that lets you handle a lot of real world use cases without getting into the complexity of true distributed transactions.
If you need "transactions" with microservices, the traditional answer is sagas: multiple transaction boundaries that don't fully commit until their entire set of things is complete, passing a message or event to the next system, and having the ability to roll back each thing in a positive manner, either by appending new correct state again or by "not ever adding" the original state.
Failure resistant systems end up having a bespoke implementation of a project management workflow built into them and then treating each task like a project to be managed from start to finish, with milestones along the way.
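A bare-bones sketch of that saga shape (the step names and the `run_saga` helper are invented): each step pairs an action with a compensation, and on failure the already-completed steps are compensated in reverse order.

```python
def run_saga(steps):
    """steps: list of (action, compensation) pairs.
    Runs actions in order; on failure, runs the compensations of the
    already-completed steps in reverse, then re-raises."""
    done = []
    try:
        for action, compensation in steps:
            action()
            done.append(compensation)
    except Exception:
        for compensation in reversed(done):
            compensation()  # best-effort undo; a real saga would retry these
        raise

log = []

def fail_shipping():
    raise RuntimeError("shipping failed")

steps = [
    (lambda: log.append("reserve inventory"), lambda: log.append("release inventory")),
    (lambda: log.append("charge card"),       lambda: log.append("refund card")),
    (fail_shipping,                           lambda: log.append("cancel shipment")),
]
```

Running `run_saga(steps)` here charges and then refunds/releases when shipping fails; no distributed transaction is involved, only per-step local commits plus compensations.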
Another POV is that solutions that require no long-term "durable workflow" style storage provide exponentially more value. If you are making something that requires durable workflows, you ought to spend a little bit of time in product development so that it does not require durable workflows, instead of a ton of time making something that isn't very useful durable.
For example, you can conceive of a software vendor that does the end-to-end of a real estate transaction: escrow, banking, signature, etc. The IT required to support the model of such a thing would be staggering. Does it make sense to do that kind of product development? That is inventing all of SAP on top of solving your actual problem. Or making the mistake of adopting Temporal, Trigger, etc., who think they have a smaller problem than making all of SAP and spend considerable resources convincing you that they do.
The status quo is that everyone focuses on their little part to do it as quickly as possible. The need for durable workflows is BAD. You should look at that problem as, make buying and selling homes much faster and simpler, or even change the order of things so that less durability is required; not re-enact the status quo as an IT driven workflow.
> To ensure monotonicity, retrieval of the idempotency key and emitting a message with that key must happen atomically, uninterrupted by other worker threads. Otherwise, you may end up in a situation where thread A fetches sequence value 100, thread B fetches sequence value 101, B emits a message with idempotency key 101, and then A emits a message with idempotency key 100. A consumer would then, incorrectly, discard A’s message as a duplicate.
Also check out Lamport/vector clocks. They solve this problem if your producers are a small fixed number.
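For reference, a minimal vector-clock sketch for a small fixed set of producers (strictly, vector clocks generalize Lamport's scalar clocks; the function names here are invented):

```python
def vc_new(n):
    """Fresh clock for a system with n producers."""
    return [0] * n

def vc_tick(clock, i):
    """Producer i records a local event; returns a snapshot to stamp on a message."""
    clock[i] += 1
    return list(clock)

def vc_happened_before(a, b):
    """True if the event stamped a causally precedes the event stamped b.
    If neither precedes the other, the events are concurrent."""
    return all(x <= y for x, y in zip(a, b)) and a != b
```

A consumer comparing stamps can order causally related messages and detect concurrent ones, but the vector grows with the number of producers, which is why this only suits a small fixed set.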
Huh. Interesting solution! I've always thought the only way to make an API idempotent was to not expose "adding" endpoints. That is, instead of exposing an endpoint "addvalue(n)" you would expose "setvalue(n)". Any adding that might be needed is then left as an exercise for the client.
These strategies only really work for stream processing. You also want idempotent APIs, which won't really work with these. You'd probably go for the strategy they pass over, which is having the client supply an arbitrary string key and just writing it down with some TTL.
Here's what I don't understand about distributed systems: TCP works amazingly well, so why not use the same ideas? Every message increments a counter, so the receiver can tell the ordering and whether some message is missing. Why is this complicated?
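The TCP idea does translate, per producer: a sketch of per-producer sequence tracking on the receiving side (the `classify` function and its labels are invented for illustration):

```python
def classify(expected, producer, seq):
    """expected: dict mapping producer id -> next sequence number wanted.
    Returns how the receiver should treat the stamped message."""
    nxt = expected.get(producer, 0)
    if seq < nxt:
        return "duplicate"   # already processed: drop it
    if seq > nxt:
        return "gap"         # an earlier message is missing or reordered
    expected[producer] = nxt + 1
    return "deliver"

expected = {}
```

What TCP gets for free — a single receiver per connection, connection setup and teardown, state that dies with the socket — is exactly what a fleet of consumers has to rebuild: the counters must be shared or persisted across crashes, and producer identity must be stable, which is where the complexity creeps back in.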
> The more messages you need to process overall, the more attractive a solution centered around monotonically increasing sequences becomes, as it allows for space-efficient duplicate detection and exclusion, no matter how many messages you have.
It should be the opposite: with more messages you want to scale with independent consumers, and a monotonic counter is a disaster for that.
You also don’t need to worry about dropping old messages if you implement your processing to respect the commutative property.
You only need monotonicity per producer here, and even with independent producer and consumer scaling you can make tracking that tractable, as long as you can avoid every consumer needing to know about every producer when the cardinality of producers is truly huge.
What I would like to learn is how to implement arbitrary client-chosen idempotency keys for public HTTP APIs to avoid duplicate requests. Stripe does this, for example; but other than keeping a record of every single request ever received, I don’t see an elegant solution…
"Idempotency key" is a widely accepted term [1] for this concept; arguably, you could call it "Deduplication key" instead, but I think this ship has sailed.
I agree, this whole thread seems to turn the concept of idempotency on its head. As far as I know, an idempotent operation is one that can be repeated without ill effect, rather than the opposite: a process that will cause errors if executed repeatedly.
The article doesn't propose anything especially different from Lamport clocks. What this article suggests is a way to deal with non-idempotent message handlers.
What surprised me the most is that the counterparty field is optional.
You'd think that a transaction means money is going from a source to a destination, but according to some banking APIs sometimes it just magically disappears into the aether.
[0] https://en.wikipedia.org/wiki/Two_Generals'_Problem
[1] https://en.wikipedia.org/wiki/Hi/Lo_algorithm
cortesoft | 3 months ago
Edit: Just looked it up... looks like this is basically what a uuid5 is, just a hash(salt+string)
amarant | 3 months ago
Which obviously has its own set of tradeoffs.
itishappy | 3 months ago
Is there any method for uniqueness testing that works after fan-out?
> You also don’t need to worry about dropping old messages if you implement your processing to respect the commutative property.
The commutative property protects you if messages are received out of order. Duplicates require idempotency.
attila-lendvai | 3 months ago
idempotency means something else to me.
gunnarmorling | 3 months ago
[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...