item 22412574

pdexter | 6 years ago

That sounds more like an implementation detail, in most cases. By this metric Haskell wouldn't be a functional language.


nybble41 | 6 years ago

It's a very important implementation detail. Explicit loops tend to imply mutation, which is contrary to idiomatic functional programming. Recursive calls don't require mutation, but they do require TCO to achieve equivalent space complexity. Constant-factor optimizations are one thing, but failing to perform TCO turns constant-space algorithms into linear-space algorithms (or linear ones into quadratic, etc.). It's less a matter of "optimizing" the calls and more a matter of not wasting limited stack space on data which is clearly not required to execute the remainder of the program.

One might as well label the practice of freeing stack frames when a function returns "Function Return Optimization" (FRO) and consider it a mere "implementation detail". After all, wouldn't it be much simpler to grab new memory every time the program needs some storage space and never bother with cleaning it up? It would certainly make debugging easier, with all those old variables retained for the life of the program and not constantly overwritten by new data. However, programs written for a language without guaranteed "FRO" would look very different from normal programs, much as programs designed to compensate for the lack of guaranteed TCO look very different from idiomatic functional programs.
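The space-complexity point can be sketched in Python, which is used here purely for illustration precisely because CPython performs no TCO (all names below are hypothetical):

```python
def sum_rec(n, acc=0):
    # Tail-recursive in form, but CPython performs no TCO:
    # every call adds a stack frame, so space usage is O(n)
    # and large inputs overflow the recursion limit.
    if n == 0:
        return acc
    return sum_rec(n - 1, acc + n)

def sum_loop(n):
    # The same algorithm as an explicit loop runs in O(1) stack
    # space; this is what TCO would turn sum_rec into.
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

print(sum_loop(1_000_000))  # 500000500000, constant stack
try:
    sum_rec(1_000_000)      # blows the default recursion limit
except RecursionError:
    print("stack overflow without TCO")
```

The two functions compute the same value; only the space behavior differs, which is the sense in which a missing TCO changes an algorithm's complexity class rather than a constant factor.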

Haskell uses a different (data-centric, non-strict) evaluation model where recursive definitions don't result in recursive calls, so traditional TCO isn't as relevant. Recursion is used very heavily in Haskell—which has no first-class looping constructs—but the resulting programs generally do not require large stacks. It's not unusual to be able to run even large Haskell programs with a 1 KiB maximum stack size (+RTS -K1k). Space leaks are possible, of course, but they take place in the heap.
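A loose analogy for that demand-driven model, sketched with a Python generator (this illustrates the idea of producing values only when the consumer asks, not Haskell's actual semantics):

```python
import itertools

def nats():
    # A demand-driven stream, loosely analogous to a lazy list:
    # each value is computed only when a consumer requests it, so
    # traversing the (conceptually infinite) stream builds no
    # call stack.
    n = 0
    while True:
        yield n
        n += 1

# Consume only a prefix of the infinite stream.
first_five = list(itertools.islice(nats(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```

Any space cost here lives in the heap-allocated generator state, not the stack, mirroring the comment's point that Haskell's space leaks happen in the heap.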

lispm | 6 years ago

Common Lisp was designed without requiring TCO because:

* various platforms don't support TCO. Common Lisp was designed so that it can be implemented by a simple non-TCO interpreter, compiled to C with a non-TCO C compiler, compiled for a non-TCO Lisp Machine CPU, or targeted at a non-TCO virtual machine (like the JVM). Many languages on the JVM don't support TCO either; some only optimize explicit tail recursion or have a compiler that detects self tail calls - which is far from full TCO support. Thus a portable conforming Common Lisp program will run in ABCL (a full Common Lisp implementation on top of the JVM), because it will not depend on TCO to avoid blowing up the stack, or similar.

* another reason was that Lisp has a bunch of features which don't work that well with TCO. For example, Lisp has always supported various dynamic scoping constructs and made extensive use of them - something which Scheme in its core does not, but provides via libraries or language extensions. Using dynamic scoping constructs makes TCO more difficult, may require a different language design, etc.
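The first bullet mentions programs that must compensate for platforms without TCO; one classic compensation is a trampoline. A hypothetical Python sketch (names and structure are illustrative, not taken from any particular implementation):

```python
def trampoline(thunk):
    # Repeatedly invoke zero-argument continuations until a
    # non-callable final value appears; the driving loop keeps
    # the stack flat even though the logic is mutually recursive.
    while callable(thunk):
        thunk = thunk()
    return thunk

def is_even(n):
    # Instead of calling is_odd directly in tail position (which
    # would grow the stack on a non-TCO platform), return a thunk
    # for the trampoline to bounce.
    return True if n == 0 else (lambda: is_odd(n - 1))

def is_odd(n):
    return False if n == 0 else (lambda: is_even(n - 1))

print(trampoline(lambda: is_even(1_000_000)))  # True, constant stack
```

As the earlier comment notes, code written this way looks quite different from the direct mutually recursive version an idiomatic functional program would use.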
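The tension between dynamic scoping and TCO in the second bullet can be sketched in Python with a hypothetical dynamic-binding helper (this models the shape of Lisp's special-variable binding, not its real semantics):

```python
_dyn = {}  # hypothetical dynamic-binding table, keyed by name

def with_binding(name, value, thunk):
    # Bind, call, then restore the old value. The restore step runs
    # AFTER thunk returns, so thunk is not actually in tail position
    # and its caller's frame cannot be discarded early - the basic
    # conflict between dynamic binding and TCO.
    old = _dyn.get(name)
    _dyn[name] = value
    try:
        return thunk()
    finally:
        if old is None:
            _dyn.pop(name, None)
        else:
            _dyn[name] = old

print(with_binding("depth", 1, lambda: _dyn["depth"]))  # 1
```

Because every dynamically scoped binding implies cleanup work after the call, a call under such a binding cannot simply be replaced by a jump the way TCO requires.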

chalst | 6 years ago

Haskell implements call-by-need reduction, in which all calls are tail calls.