The combination of lazy evaluation and state mutation or other side effects can be very difficult to reason about. For example, if a function changes a global variable as part of a lazy computation, then once that function could have been called, you have no way of knowing whether or when that global variable will change in the future. If other functions depend on the value of that variable, their future behavior is now much harder to reason about than in a strict language. You can also imagine something akin to a race condition, in which multiple lazy computations could eventually set that variable to different values, and the actual sequence of state transitions depends entirely on the evaluation order of a possibly unrelated piece of code. In practice, this means that in languages that are strict by default, lazy computations are often forced to run in order to make the code possible to reason about, rather than because the actual results of the computation are required.

Since pure functions compute the same results under lazy or strict evaluation, and require that any data dependencies be explicitly provided as inputs, they interact with lazy computations in a much more tractable way. This is why adding a strictness operator to a lazy language is much easier than adding a laziness operator to a strict language.
An alternate approach is what Python did with generators: there is a data type for lazy computation, but it lives apart from the rest of the language, so it is mostly used for tasks like stream processing, where a lazy-by-default approach is conceptually straightforward and less likely to lead to extremely non-trivial control flow. This approach does, however, essentially give up on having a laziness operator that will turn a strict computation into a lazy one.
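A sketch of that generator-based style of laziness: pipeline stages compose without running until values are pulled, which is exactly the stream-processing shape where laziness stays tractable. (The pipeline itself is illustrative, not from the original.)

```python
import itertools

def numbers():
    """An infinite lazy stream of integers."""
    n = 0
    while True:
        yield n
        n += 1

# Each stage is itself lazy; nothing executes until values are demanded.
evens = (n for n in numbers() if n % 2 == 0)
squares = (n * n for n in evens)

# Only now does any computation run, and only as much as requested.
first_five = list(itertools.islice(squares, 5))
assert first_five == [0, 4, 16, 36, 64]

# By contrast, an ordinary call like f(x) is always strict; the closest
# thing to a laziness operator is wrapping the call in a thunk by hand:
thunk = lambda: sum(range(10))  # deferred manually, not by the language
assert thunk() == 45
```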