top | item 25273640


jrbancel | 5 years ago

> hence you need to query _all_ the nodes or can end up in a split-brain situation and lose consistency

I am not sure this is true. Reading from all replicas is prohibitively expensive; all you need to ensure is that any read is guaranteed to see the latest state. See how Azure Storage does it [0]. Another example is Spanner [1], which uses timestamps rather than consensus for consistent reads.

[0] https://sigops.org/s/conferences/sosp/2011/current/2011-Casc... [1] https://cloud.google.com/spanner/docs/whitepapers/life-of-re...
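A minimal sketch of why you don't need to query all nodes (not how Azure Storage or Spanner specifically do it): with N replicas, a write quorum W, and a read quorum R where R + W > N, any read set must intersect the last write set, so the highest-versioned value among R replicas is always the latest write. All names and parameters here are illustrative.

```python
import random

# R + W > N guarantees every read quorum overlaps every write quorum.
N, W, R = 5, 3, 3

replicas = [{"version": 0, "value": None} for _ in range(N)]

def write(value, version):
    # A write succeeds once W replicas acknowledge it.
    for i in random.sample(range(N), W):
        replicas[i] = {"version": version, "value": value}

def read():
    # Read any R replicas and return the highest-versioned value;
    # the quorum overlap guarantees one of them holds the latest write.
    sampled = [replicas[i] for i in random.sample(range(N), R)]
    return max(sampled, key=lambda r: r["version"])["value"]

write("a", 1)
write("b", 2)
assert read() == "b"  # holds for any random choice of quorums
```

The point is that consistency comes from quorum intersection (or, in Spanner's case, from timestamps), not from contacting every replica.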

valenterry | 5 years ago

Yes, so to quote from it [0]:

> This decoupling and targeting a specific set of faults allows our system to provide high availability and strong consistency in face of various classes of failures we see in practice.

What that says is: for the failures we have seen so far, we guarantee consistency and availability. But what about other kinds of failures? The answer is clear: strong consistency is only guaranteed for _certain expected_ failure classes, not in the general case. So, generally speaking, strong consistency is _not_ guaranteed. As simple as that. Saying it is guaranteed "in practice" is, pardon, stupid wording.