sbazerque | 1 year ago
> A program is monotonic if, when you run it on a subset of its inputs, you get a subset of its outputs. As you run it on more data, the set of true things may grow, but it never shrinks.
Yeah, this framework seems powerful.
Something I find interesting is that you can get monotonic (and therefore coordination-free) relaxations of arbitrary problems. In the extreme case, you can derive a relaxed version P' like this:
P'(S) = {<s, P(s)> | s ⊆ S}
and now
P'(S) ⊆ P'(T) if S ⊆ T for _any_ (well defined) P
This seems tautological, but in some cases a relaxed version is good enough: it gives you convergence and eventual consistency in a coordination-free setting, at the cost of maybe having to roll back some results. And when it isn't good enough, it still gives you a coherent model of what to make of the situation until coordination yields a definitive answer.
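The construction above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the linked report; the names `relax` and `subsets` are mine, and the example `P` ("is the input empty?") is a deliberately non-monotone toy function whose relaxation P' is monotone by construction:

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of a set, as frozensets."""
    items = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

def relax(P):
    """Build the monotone relaxation P' of an arbitrary function P over sets:
    P'(S) = { (s, P(s)) : s ⊆ S }, i.e. pair every subset with P's answer on it."""
    def P_prime(S):
        return {(s, P(s)) for s in subsets(S)}
    return P_prime

# A non-monotone P: flips from True to False as the input grows.
P = lambda s: len(s) == 0
P_prime = relax(P)

S = frozenset({1})
T = frozenset({1, 2})
# S ⊆ T implies P'(S) ⊆ P'(T), for any well-defined P.
assert P_prime(S) <= P_prime(T)
```

The monotonicity is mechanical: every subset of S is also a subset of T, so every pair in P'(S) appears in P'(T). The price is that P'(S) contains answers for stale subsets, which a consumer may later have to discard (the "roll back some results" above).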
I wrote about this idea here: https://www.hyperhyperspace.org/report.html#conflict-resolut...
But I only wrote that like last week; I haven't really put this into practice yet.
In those examples, what's being processed are partially ordered operation logs, but it's essentially the same idea (just that whenever S ⊆ T there, what you're seeing is an extension of an op log, which is a bit more intuitive).
User23 | 1 year ago
The logic for taming unbounded nondeterminism has been around for decades though. As Dijkstra and Scholten admit, it’s basically just applied lattice theory.
In fact, at a glance, this paper appears to be building on that foundation. It’s not hard to see how monotonicity makes reasoning about nondeterminism considerably more manageable!