(no title)
madmax96 | 2 years ago
Proofs are good evidence that pure functions are easier to reason about. Many proof assistants (Coq, Lean, F*) use the Calculus of Inductive Constructions, a language that only has pure, total functions, as their theoretical foundation. The fact that state-of-the-art tools for reasoning about programs are built on pure functions is a pretty good hint that pure functions are a good tool for reasoning about behavior. At least, they're the best way we have so far.
This is because of referential transparency. If I see `f n` in a language with pure functions, I can simply look up the definition of `f` and copy/paste it into the call site, with every occurrence of `f`'s parameter replaced by `n`. I can simplify the expression as far as possible. Not so in an imperative language. There could be global variables whose state matters. There could be aliasing that changes the behavior of `f`. To actually understand what the imperative version of `f` does, I have to trace its execution. In the worst case, __every time__ I use `f` I must repeat this work.
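A minimal sketch of that substitution argument, in Python (the function names and the global are hypothetical, just for illustration):

```python
# Pure: `f_pure(n)` can always be replaced by its body with the
# parameter substituted, so every call site means the same thing.
def f_pure(n):
    return n * n + 1

# Impure counterpart: the result depends on mutable global state,
# so the "same" call can mean different things at different times.
scale = 2

def f_impure(n):
    return n * scale + 1

# Referential transparency: the copy/paste rewrite is always valid.
assert f_pure(3) == 3 * 3 + 1

# The same rewrite is unsound for f_impure: mutate the global and
# an identical-looking expression now denotes a different value.
before = f_impure(3)   # scale == 2 here
scale = 10
after = f_impure(3)    # same source text, different result
assert before != after
```

This is the tracing cost in miniature: to know what `f_impure(3)` returns you must know the entire history of writes to `scale` up to that point.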
wredue | 2 years ago
I don’t really accept “this group of people whose heads are super far up the ‘pure functions’ ass chose purity for their solutions” as “evidence” that purity is better.
I’m not saying that purity is bad by any stretch. I just consider it a tool that is occasionally useful. For methods modifying internal state, I think you’ll have a hard time with the assertion that “purity is easier to reason about”.
madmax96 | 2 years ago
Modeling the method that modifies internal state as a function from old state to new state is the simplest way to reason about it. That's exactly what preconditions and postconditions are: a relation between the state before the call and the state after it.
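A small sketch of the same idea in Python (the `Counter` class and `bump` function are made-up examples):

```python
# Imperative style: a method that mutates internal state in place.
class Counter:
    def __init__(self):
        self.count = 0

    def bump(self, by):
        self.count += by  # reasoning requires tracking `self` over time

# Pure style: the same operation as a function from old state to new
# state. The pre/postcondition relation is just: new == old + by.
def bump(old_count, by):
    return old_count + by

# Reasoning about the pure version is substitution again: a sequence
# of method calls becomes ordinary function composition.
s0 = 0
s1 = bump(s0, 2)
s2 = bump(s1, 3)
assert s2 == bump(bump(0, 2), 3)
```

The mutation hasn't gone anywhere; it has just been made explicit as data flowing between calls, so the copy/paste reasoning from upthread applies to it too.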