oggy | 3 years ago
What I'll say is a simplification, but I'd describe TLA+ as a one-trick pony that does that one trick very well. Roughly, you have to structure the model of your algorithm as a transition system, described using one predicate for "these are the legal initial states" and another predicate over pairs of states that says how you can move from one state to the next. To describe these predicates, you get a fairly simple but expressive language (a TLA+ cheat sheet probably fits on a single page). This format lends itself well to describing the high-level ideas of distributed protocols or concurrent algorithms (roughly what you'd write as pseudocode for those protocols/algorithms).

You then use that same predicate language, plus some additional operators for talking about time (e.g., you can say things like "never" or "eventually"), to specify what your system should do.

Finally, you get an automated analysis tool (actually two tools these days, TLC and Apalache) that checks whether your model satisfies the specification. The beauty is that the checking is push-button: once you've finished writing the model and the spec, your work is largely done. The analysis does have severe limitations: your model can't be too big, and it has to be finite (e.g., if you're writing a model of a distributed protocol, you have to limit the analysis to settings with a handful of nodes). But in practice even this limited analysis weeds out most of the bugs.
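To make that concrete, here's a sketch of what that shape looks like: a toy `Counter` module of my own invention (not from any real protocol), with an `Init` predicate, a `Next` relation, an invariant, and an "eventually" property that TLC could check:

```tla
---------------------------- MODULE Counter ----------------------------
EXTENDS Naturals
VARIABLE x

\* Legal initial states: the counter starts at zero.
Init == x = 0

\* Transition relation: from any state with x < 3, step to x + 1.
Next == x < 3 /\ x' = x + 1

\* An invariant the model checker verifies in every reachable state.
TypeOK == x \in 0..3

\* The behavior spec, with weak fairness so Next can't stall forever.
Spec == Init /\ [][Next]_x /\ WF_x(Next)

\* A temporal ("eventually") property: the counter reaches 3.
ReachesThree == <>(x = 3)
========================================================================
```

Everything is just these two predicates plus properties; the model checker does the rest.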
Isabelle and Coq are theorem provers (as are a bunch of others). That means you have to define whatever you're modeling as a mathematical object, and then you go off and prove things about it. They're both extremely flexible: you can model a transition system just like in TLA+, but you can also talk about any kind of mathematics (algebra, statistics, financial math, whatever). You can also encode programming-language semantics, as well as program logics, which allow you both to model actual C or OCaml (or whatever) code and to specify properties about programs as mathematical theorems and prove them in a more comfortable way. The analysis (which here means proving a theorem) is generally not limited to finite instances; e.g., you can prove a distributed protocol correct for any number of nodes.
The snag is that in Isabelle or Coq the proof is largely manual: you have to sort of type in the argument by hand, often in excruciating detail. If you want to verify a program with N lines of code, you'll typically end up writing 5-20 times N lines of proof to convince the theorem prover that the program is correct. To help with this, you'll generally have a large library of already-proved theorems at your disposal (6-7 years ago I counted around 200k theorems available for Isabelle), as well as a "metaprogramming language" for writing your own analysis (i.e., proof) procedures.
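For flavor, here is a toy example of my own (in Lean, a theorem prover in the same family) showing how even a trivial fact about a two-line function needs an explicit induction argument, with automation like `simp` standing in for the proof procedures mentioned above:

```lean
-- A toy recursive function: double n computes 2 * n.
def double : Nat → Nat
  | 0     => 0
  | n + 1 => double n + 2

-- Even this small fact takes an explicit induction;
-- `simp` discharges the arithmetic in the step case.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  induction n with
  | zero      => rfl
  | succ k ih => simp [double, ih, Nat.mul_succ]
```

Scale this up to a real protocol or a real C program and you get the 5-20x proof-to-code ratios described above.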
For academics, especially programming-languages and formal-methods people, the work often consists of writing new program logics, or new proof procedures, or developing whole new theories. Theorem provers lend themselves better to that kind of work, and to pushing boundaries such as doing code-level analysis. But for engineering, TLA+ can be the 80/20 solution in many cases.
anonymousDan | 3 years ago