My company (among other things) routes print-on-demand orders to various print companies. Some of their APIs have mechanisms to ensure idempotency, some don't. The last time I pressed the issue, I was asked - and I quote - "Can't you just send the order only once?"
The thing is, having a print company that gets the printing part right is more important than having one that gets the API right. I use them anyway, and accept the risk that there will very occasionally be duplicate orders. At least in my business, it's just t-shirts.
A few months ago I bought a fairly expensive cordless vacuum from hoover.com. I was charged once, but two of them arrived. I suspect I know why.
I have these sorts of facepalmable discussions from time to time. The last time was with a company that provides an invoicing service through an API. They are (and I am not shitting you) unable to ensure unique invoice numbers and told me to "just don't try to open and issue more than one invoice at any time".
Next time you fly, ask the person processing the check-in how frequently the following happens: somebody arrives with a confirmed ticket reservation and the money charged to their credit card, but no actual e-ticket can be found in Sabre that will allow them to fly.
It's something I have been tracking over the years; rare, but it happens. Again, some non-transactional, non-idempotent integrations set up out there...
> There are essentially three types of delivery semantics: at-most-once, at-least-once, and exactly-once.
Oh, there's a fourth kind: "none-of-the-above", i.e., neither at-most-once nor at-least-once. The message gets delivered between 0 and ∞ times. Maybe it gets delivered … maybe not. Your message is like a UDP packet.
A surprising number of systems exhibit this behavior, sadly.
Which the author admits three quarters of the way through:
> The way we achieve exactly-once delivery in practice is by faking it. Either the messages themselves should be idempotent, meaning they can be applied more than once without adverse effects, or we remove the need for idempotency through deduplication.
Honestly I don't get why this is "faking it" though. It seems like the author's definition of "exactly once" is so purist as to essentially be a strawman. This is "exactly once" in practice.
Like are there other people claiming that this purist version of exactly-once does exist?
I think we need to keep the concepts separate, because otherwise people get confused. You cannot guarantee that a message is received exactly once. Yes, if you know this is an issue, it's not that hard to build a system where receiving the same message more than once won't cause a bad thing to happen. There are a few principled ways to do this, and some less principled ways that will still mostly work.
But that's not because you built a system that successfully delivers messages exactly once... you built a system that successfully processes messages exactly once, even if delivery occurs multiple times. The delivery still occurred multiple times. Even if your processing layer handled it, that may have other consequences worth understanding. Wrapping that up in a library may present a nice API for some programmer, but it doesn't solve the Byzantine Generals problem.
Whenever someone insists they can build Exactly Once with [mumble mumble mumble great tech here] I guarantee you there's a non-empty set of human readers coming away with the idea they can successfully create systems based on exactly-once delivery. After all, I built some code based on exactly-once delivery last night and it's working fine on my home ethernet even after I push billions of messages through it.
We're really better off pushing "There is no such thing as Exactly Once, and the way you deal with it is [idempotence/id tracking/whatever]", not "Yes there is such a thing as Exactly Once delivery (see fine print about how I'm redefining this term)". The former produces more accurate models in human brains about what is going on and is more likely to be understood as a set of engineering tradeoffs. The latter seems to produce a lot of confusion and people not understanding that their "Exactly Once" solution isn't a magic total solution to the problem, but is in fact a particular point on the engineering tradeoff spectrum. In particular, the "exactly once" solutions can be the wrong choice for certain problems, like multiplayer game state updates, where it may be a lot more viable to think 1-or-0 with some timestamping, and to accept the ability to miss messages entirely and recover, rather than building an "exactly once" system.
AFAIK the point of exactly once delivery, in the context of message passing, is to abstract delivery concerns away from the application layer and into the messaging layer, so that the application can depend on the exactly-once semantics without having to write logic for it.
The problem with this is similar to the problems with two-phase commit in distributed databases: there are unavoidable failure cases. Most of the time it works just fine, but if you write your application to depend on this impossible feature, and it fails - which, given enough time, will certainly happen - then cleaning up the mess can be much more effort (and have much wider business implications) than simply dealing with the undesirable behaviour of reality in the first place.
Or to put it another way: exactly-once semantics can never be reliably abstracted away from the application, so if you need it, it needs to be part of your application.
Theoretically true, and easy to say. But the hard part is actually implementing this in the context of business problems. What if you need to call external services that you don't control, and they don't provide idempotence? Like sending emails. Or worse: you send a message to a warehouse to deliver an item, and they deliver duplicates...
> We must choose between the lesser of two evils, which is at-least-once delivery in most cases. This can be used to simulate exactly-once semantics by ensuring idempotency or otherwise eliminating side effects from operations.
There is a third option besides idempotency and eliminating side-effects: give each message a unique ID, use that to keep a record of which messages have been processed, and don't process the same message twice.
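A minimal sketch of that third option (all names here are invented; in a real system the set of seen IDs would have to live in durable, shared storage):

```python
import uuid

class DedupReceiver:
    """Processes each message ID at most one time, however often it arrives."""

    def __init__(self):
        self.seen = set()    # must be durable/shared storage in production
        self.processed = []

    def deliver(self, msg_id, payload):
        # The transport may invoke this any number of times per message.
        if msg_id in self.seen:
            return False     # duplicate: acknowledge, but don't reprocess
        self.seen.add(msg_id)
        self.processed.append(payload)
        return True

receiver = DedupReceiver()
msg_id = str(uuid.uuid4())
receiver.deliver(msg_id, "charge $10")  # first delivery: processed
receiver.deliver(msg_id, "charge $10")  # retried delivery: discarded
assert receiver.processed == ["charge $10"]
```

The catch, as other comments point out, is that `seen` itself becomes state you must keep consistent across crashes and failovers.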
If you want to reason about a world that has random software and hardware failures, then you cannot really have any kind of pure results. A backhoe could cut your network cable at exactly the wrong point, or a malfunctioning network switch could decide to insert the right extra few bytes in exactly the wrong place, changing the meaning of your message without altering any of the checksums. As the scale of your application increases, the chance of this sort of chaos increases as well. The question then becomes how to reason in the face of chaos, what sorts of error rates are acceptable, and how to build systems that can recover from supposedly impossible states. If your bank's software makes an error, they have established processes to determine that and correct the balance of accounts.
Maybe I didn't get the point. Of course we can't have exactly-once delivery directly in the layer of an unreliable network - but it seems pretty easy to construct it on a higher layer if your network stack supports at-least-once delivery: just assign each message a unique ID during sending, then track those IDs on the receiver side and discard duplicates. And you need those IDs anyway so you can generate ACKs.
Isn't this basically what every "reliable" transport (TCP, HTTP3, message queues...) does?
Wherever you construct it you must necessarily have a machine whose failure mode is that "exactly once" degrades into either "at most once" or "at least once".
What determines which failure mode you get is whether the machine will failover to a machine that retries uncertain messages (giving you "at least once"), or it doesn't (giving you "at most once").
But, you say, why can't we have it failover to a machine that asks recipients what they have got and goes from there? Well we can, but the recipients don't know what messages are in the network still on their way to them.
But, you say, why not have the recipients disregard those inbound messages once they know about the replacement machine? Well you can do that, but now the *recipients* become machines whose job is to ensure the deduplication. And now *they* become the machine with a bad failure mode.
But, you say, does this not reduce the odds of failures? Why yes, it does. Which is why people do things like this. And there has to come a point where we accept SOME failure rate.
Now you need a database. Do you also need exactly-once delivery to the database? Now the service is no longer stateless too, which means scalability is a problem. Maybe you decide to make it just an in-process cache for de-duping, but that needs expiring and now the semantics are exactly-once within a given time period, and not across service restarts.
We can definitely solve this with higher level constructs, but they're not free, and they can introduce the same issues themselves.
> Isn't this basically what every "reliable" transport (TCP, HTTP3, message queues...) does?
TCP does this, to solve retries at the TCP layer. HTTP3 does this to solve issues at the HTTP3 layer. Message queues might solve this for the message queue, depends. But none of these solve the product level, or the user experience level, or other higher levels where these issues still crop up. They're issues you have to solve at every layer in some way.
So when does the receiver record the IDs? When it receives the message but before processing or after it's processed the message? If the former, then what if it goes down during processing? Then the other receivers will keep rejecting the message even though it's never processed. So now it's less than once. If the receiver records it after it's done processing, then it could go down after processing but before recording it in the DB. So now you have more than once.
Also, isn't the assumption here that you will have a reliable connection to a shared DB?
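The two failure windows described above can be made concrete with a toy sketch (the crash is simulated with an exception; every name is invented):

```python
class Crash(Exception):
    """Stands in for the receiver dying between two steps."""

def receive(record_first, crash_between, seen, handled, msg_id):
    # The ordering of "record the ID" vs "process the message" decides
    # which guarantee is lost when the receiver dies in between.
    if msg_id in seen:
        return
    steps = [lambda: seen.add(msg_id), lambda: handled.append(msg_id)]
    if not record_first:
        steps.reverse()
    steps[0]()
    if crash_between:
        raise Crash
    steps[1]()

# Record-first: the redelivery is rejected even though nothing was processed.
seen, handled = set(), []
try:
    receive(True, True, seen, handled, "m1")
except Crash:
    pass
receive(True, False, seen, handled, "m1")
assert handled == []               # fewer than once

# Process-first: the redelivery is processed again.
seen, handled = set(), []
try:
    receive(False, True, seen, handled, "m1")
except Crash:
    pass
receive(False, False, seen, handled, "m1")
assert handled == ["m1", "m1"]     # more than once
```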
You can have engineered solutions that are pragmatically close to exactly-once delivery, but it's not "pure" -- there are still scenarios, however unlikely, in which it will fail.
The context is exactly once in a distributed system. When you construct that higher layer you will make your system highly coordinated, thus no longer a distributed system.
These issues are in the context of distributed systems where you want to be able to recover from losing a receiver (e.g., we want to be able to reassign partitions for a Kafka topic when a consumer goes down). If you don't mind your system grinding to a halt whenever you lose one of your receivers (that's perfectly fine in some circumstances!), then your proposed solution works great.
Edit: also, I should be fair and acknowledge that you're effectively describing idempotency (I'm guessing you already knew that ;P ), which the article's author eventually points out is a way to recover "exactly-once" semantics. The point, maybe, is that someone needs to explicitly do this somewhere; you can't really rely on your protocol to do it for you.
Yes, but the receiver can be faulty. If it acknowledges the message and then crashes before handling it, you've got at-most-once, and if it handles the message and then crashes before acknowledging it, you've got at-least-once. You can avoid this if the receiver handles and acknowledges in a single transaction, but I only know of one platform that implements this and everyone hates it (hence the throwaway).
What if the receiver process fails? How do you know which messages it processed successfully? You can shuffle the problem around indefinitely but it doesn’t go away.
If your processing doesn’t have any external side effects (make external API calls, send emails, charge credit cards, etc) then one option is to put your message queue in a relational DB alongside all your other data. Then you can pull a message, process it, write the results to the DB, and mark the message as processed all inside one big transaction. But not many use-cases can fit these constraints and it also has low throughput.
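A rough sketch of that single-transaction pattern, with sqlite3 standing in for the shared relational database (the table names and message format are made up for the example):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE queue (id INTEGER PRIMARY KEY, body TEXT, done INTEGER DEFAULT 0);
    CREATE TABLE balances (account TEXT PRIMARY KEY, cents INTEGER);
""")
db.execute("INSERT INTO balances VALUES ('alice', 1000)")
db.execute("INSERT INTO queue (body) VALUES ('credit alice 500')")
db.commit()

def process_one(db):
    # Pull, apply, and mark done inside ONE transaction: if the worker dies
    # mid-way, the whole unit rolls back and the message is retried cleanly.
    with db:
        row = db.execute("SELECT id, body FROM queue WHERE done = 0 LIMIT 1").fetchone()
        if row is None:
            return None
        msg_id, body = row
        _verb, account, amount = body.split()
        db.execute("UPDATE balances SET cents = cents + ? WHERE account = ?",
                   (int(amount), account))
        db.execute("UPDATE queue SET done = 1 WHERE id = ?", (msg_id,))
        return msg_id

process_one(db)
assert db.execute("SELECT cents FROM balances WHERE account='alice'").fetchone()[0] == 1500
assert process_one(db) is None   # already marked done: no double-apply
```

This only works because the effect (the balance update) lives in the same database as the queue; an external side effect such as an email would escape the transaction, exactly as the comment says.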
> FLP and the Two Generals Problem are not design complexities, they are impossibility results.
These are impossibility results given various assumptions and requirements that may not hold in practice, or may be too restrictive. For one thing, I suppose we're pretty happy with a probabilistic solution as long as we can get the probability below an acceptable threshold.
Ok, so maybe the letter gets sent more than once, but the message gets processed exactly once, because the messages are numbered and you only process each number once.
If you get a letter with the same number you already read you don't even open it.
In Kafka this is also handled this way, events are numbered, and you request "latest" from the last one you processed.
In our event streaming it's done this way too; it may surprise you that Kafka is just one implementation of event streaming, not synonymous with it.
With Kafka, the consumer offset (up to which number a consumer has already processed) used to be handled by ZooKeeper, but has been migrated to the Kafka brokers themselves.
There is no "exactly one consumer gets the message" done by ZooKeeper; all consumers get all the messages from the topics they subscribed to. If you want each message to go to exactly one consumer, you should use separate topics (or a consumer group, within which each partition is read by exactly one consumer).
Why not mention the architecture that comes closest to exactly-once delivery? If you store a Kafka offset along with your application state in a transactional datastore, then for all intents and purposes you have exactly-once delivery semantics. This is something I really like about Kafka’s design.
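A toy model of that pattern - a plain list stands in for the Kafka partition and a dict for the transactional datastore, so this shows only the commit discipline, not real Kafka APIs:

```python
import copy

partition = ["a", "b", "b", "c"]       # an append-only log with fixed offsets
store = {"offset": 0, "counts": {}}    # application state + offset, together

def consume(store, partition):
    # Each iteration commits the new state AND the advanced offset as one
    # atomic unit (deepcopy-then-swap stands in for a DB transaction).
    while store["offset"] < len(partition):
        txn = copy.deepcopy(store)
        event = partition[txn["offset"]]
        txn["counts"][event] = txn["counts"].get(event, 0) + 1
        txn["offset"] += 1
        store.clear()
        store.update(txn)

consume(store, partition[:2])   # process two events, then pretend we crash
assert store == {"offset": 2, "counts": {"a": 1, "b": 1}}

consume(store, partition)       # a restart resumes from the committed offset
assert store == {"offset": 4, "counts": {"a": 1, "b": 2, "c": 1}}
```

Because the offset moves in the same transaction as the state, a replay after a crash can't double-count. The caveat is that this only covers effects stored inside that same datastore.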
This is again assuming that you have no side effects. Imagine that you want to email users based on a list in Kafka. You read the offset in a transaction and update it. But do you send the email inside the transaction or after closing it? You are back to picking between at-least-once and at-most-once.
I suspect it's not mentioned because in the real world there are ways to work around this limitation. And because those work arounds actually work, there's nothing interesting to say. The post is much ado about nothing.
I decided a long time ago in 3+Mail that we could occasionally have messages delivered twice, or not at all, but there was no easy way to be sure neither ever happened. So you bias it to "twice."
I was completely naive to distributed systems until I was field promoted to owning one after tons of attrition with no backfilling roles.
It was a system built by people who also didn't have distributed systems experience. It was not enjoyable at all, and at-least-once delivery was a consistent headache that required infrequent but time-consuming remediation.
I have been toying with the idea lately of using a transactional database (like SQL) to manage some of the very important queues.
Using a transaction to retrieve an item from the queue, and locking the row using "SELECT FOR UPDATE" and "SKIP LOCKED". Such that the row gets locked on read, and several workers can read from the table at the same time. Within the same transaction, other work is done, and everything gets committed to the database as a single atomic operation.
CockroachDB (a consensus/raft distributed database) recently added support for SKIP LOCKED, but I still have yet to work on this idea.
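A rough sketch of the claim step. sqlite3 is used here only so the example is self-contained; it has no `SELECT ... FOR UPDATE SKIP LOCKED`, so an atomic claim-column UPDATE stands in for the row lock (in PostgreSQL or Oracle the locked SELECT itself skips contested rows):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, body TEXT, claimed_by TEXT)")
db.executemany("INSERT INTO jobs (body) VALUES (?)", [("job-1",), ("job-2",)])
db.commit()

def claim(db, worker):
    # Atomically mark one unclaimed row as ours; two workers can never end
    # up with the same row, because the UPDATE is a single statement.
    db.execute(
        "UPDATE jobs SET claimed_by = ? "
        "WHERE id = (SELECT MIN(id) FROM jobs WHERE claimed_by IS NULL)",
        (worker,))
    db.commit()
    row = db.execute("SELECT MAX(id) FROM jobs WHERE claimed_by = ?",
                     (worker,)).fetchone()
    return row[0]

assert claim(db, "worker-1") == 1
assert claim(db, "worker-2") == 2
assert claim(db, "worker-3") is None   # nothing left to claim
```

Unlike FOR UPDATE, the claim here outlives the transaction, so a crashed worker's rows would need a reaper; SKIP LOCKED avoids that because the row lock is released automatically when the transaction ends.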
I have worked on a system that took exactly this approach for ~17 years. The database was Oracle; at the time we started, 'SKIP LOCKED' was not even a documented feature of the Oracle DBMS. It is now. The approach worked quite well for us and is happily working today at several large banks. Also, Oracle sells what I think they call AQ (Advanced Queuing), which provides a messaging API but uses the DBMS for storage. No idea how it performs relative to dedicated persistent messaging solutions, but I would guess it is probably good enough for many workloads.
> Therefore consumer applications will need to perform deduplication or handle incoming messages in an idempotent manner.
> The way we achieve exactly-once delivery in practice is by faking it. Either the messages themselves should be idempotent, meaning they can be applied more than once without adverse effects, or we remove the need for idempotency through deduplication.
Doesn't have to be a "true" exactly-once. Just practically so. It's like saying "humans can't actually fly... they are faking it by using machines".
This article reads like it was written by someone who has a very superficial understanding of the theory they claim proves their point. For starters, the two generals problem does not prevent one party from knowing that a message they sent previously was delivered exactly once. It just prevents both parties from establishing common knowledge in the presence of message loss. Not that I am claiming to be any better!
Of course you can have exactly-once delivery. I mean... we know how to construct software that will transfer money from one account to another, exactly once. It really isn't rocket science but it does require a little bit of understanding of various tradeoffs that you are making.
It is a bit like saying that we can't have straight lines. Of course, if you zoom in far enough to see individual atoms, every physical surface will look jagged. But in practical terms we can have straight lines and surfaces to a good enough approximation. It means specifying what "straight" means and figuring out how to measure it and how to produce "straight" according to specification and measurements.
Engineering is about knowing and making tradeoffs. Every device we have ever created has to contend with limitations of physical existence. Engineering is about accomplishing goals in presence of those limitations.
A person who says "you can't deliver a message exactly once" clearly lives in an idealised, theoretical world. I would urge you to leave your ivory tower for a second and see how engineers in the real world accomplish what you say is not possible.
I get that this knowledge is useful -- but don't publish it as gospel. "You cannot have exactly-once delivery" is true, but not the same kind of truth as "you can't travel faster than light". No engineering can get you to travel faster than light. But engineering can get you as close to exactly-once delivery as you want to the point where the original statement stops being meaningful for real life problems.
I don't understand where the anger comes from, the article makes it clear it's talking about distributed systems theory.
Like someone else said, you can use at least once delivery and handle duplicate messages, but that's not quite the same as a distributed system guaranteeing that a message will be delivered exactly once.
> I mean... we know how to construct software that will transfer money from one account to another, exactly once.
Assuming you are talking about transferring between institutions, there is actually no single piece of software with this responsibility. The business processes are effectively what provide these guarantees (typically by way of another 3rd party).
In order to accomplish this, added latency (settlement time) is necessarily introduced into the process.
Have you actually read the article? The title is just a summary and the author fully acknowledges that their argument is essentially based around edge cases, not that this in any way diminishes it for me.
It’s just an interesting piece of theorising.
I think your comment (particularly your 4th paragraph about ivory towers etc) comes across as overly harsh and a little aggressive.
You can get exactly-once in a system if you design for consistency (in the CAP sense) and use a consensus protocol. Those systems don't offer availability (in the CAP sense) by definition. And I guess when people say exactly-once is impossible, they're speaking about systems that offer availability.
What did they suggest you should do if you sent the order once and it didn't arrive?
The alternative, well, read The Saddest Moment at https://scholar.harvard.edu/files/mickens/files/thesaddestmo... to see where madness leads.
So not true.
You Cannot Have Exactly-Once Delivery - https://news.ycombinator.com/item?id=9266725 - March 2015 (55 comments)
Of course there are other concerns with using a database as a queue (mostly at high throughput) but for most cases it will work well.
I guess everyone has to make this mistake once in their career.
Funny enough, when I searched for “database as a queue”, my own comment from four years ago came up as the fourth result.
https://news.ycombinator.com/item?id=18774559
This isn't an opinion. This is a fact of distributed systems. An axiom, if you will.
Assuming the network will recover in time may be a reasonable assumption sometimes, though.