Succinct and better? It’s fewer lines. It uses recursion. But it’s obtuse and illegible syntax diarrhea that takes more time to grok.
The for loop example above, while simple, is easily understood. A junior engineer could fix it. Is it faster? No. Is it easier to understand? Yes. I like the final example as well.
Look, I’m all for being clever when cleverness is needed but some of you JS/TS folks take it to the extreme. I want to be able to read my codebase, not perform archaeological studies. I have the same issue with Rust syntax as well in places.
I will praise the article on this though: optimizations do matter, and having something execute orders of magnitude faster will improve your overall experience. It doesn’t have to be cuneiform. If you must be clever, wrap it in an abstraction that is well documented, because you don’t live forever and neither will your code; make it easy(ier) on the next person. While you can nest a recursive function in a tertiary statement coro, don’t. Fire is cool but also burns.
Btw, bun is blazingly fast. I look forward to more.
I’m surprised no one has brought up the fact that this code has quadratic time complexity when the underlying algorithm could be linear.
And the fact that its allocation behaviour is horrible, leaving behind ‘count-1’ garbage objects, count-squared garbage memory, and forcing the GC to chase after the algorithm.
I can accept this example as a contrived scenario to talk about TCO… but even in the presence of TCO one would not want to implement the function this way.
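To make the quadratic point concrete, here is a hypothetical reconstruction of the kind of function being discussed (my own names and code, not the article's): the spread version copies the whole accumulator on every call, so n steps touch O(n²) memory and leave every intermediate array behind as garbage, while the loop version reuses one array.

```javascript
// Hypothetical reconstruction, not the article's exact code.
// Quadratic: every step spreads the accumulator into a fresh array,
// copying 1 + 2 + ... + (n-1) slots overall and leaving each old
// array behind for the GC.
function countSpread(n, acc = []) {
  return n === 0 ? acc : countSpread(n - 1, [...acc, n]);
}

// Linear: one array, mutated in place; O(1) amortized work per step.
function countPush(n) {
  const acc = [];
  for (let i = n; i > 0; i--) acc.push(i);
  return acc;
}
```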
> Look, I’m all for being clever when cleverness is needed but some of you JS/TS folks take it to the extreme.
To be fair, this is very much a toy example meant to demonstrate tail recursion, although I do agree it's not a great example, considering you need to understand the evaluation order of ternaries to see why one version is tail-recursive and the other isn't.
That being said: generally speaking, one might argue that using recursion at all is "too clever" and algorithms should just stick to iteration altogether. ¯\_(ツ)_/¯
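To spell out the ternary point with a toy example of my own: the branch the engine selects becomes the return value directly, so a bare recursive call there sits in tail position, while any pending work wrapped around the call does not.

```javascript
// Tail-recursive: the recursive call's result is returned unchanged,
// so an engine with proper tail calls can reuse the current frame.
const sumTo = (n, acc = 0) => n === 0 ? acc : sumTo(n - 1, acc + n);

// NOT tail-recursive: after the recursive call returns, we still have
// to add 1, so the current frame must be kept alive.
const depth = n => n === 0 ? 0 : 1 + depth(n - 1);
```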
> While you can nest a recursive function in a tertiary statement coro, don’t.
If one is well-versed in functional programming, they would understand the above function just fine, plus it comes with all the benefits of functional versus imperative styles, which I will not repeat here as there are lots of articles on this. Perhaps JS devs should spend more time in functional codebases rather than imperative ones and they'll also get used to such code, and I for one am glad to see TCO making this style more efficient compared to traditional for-loops.
The ubiquity of C-style syntax makes it more familiar. But that doesn't make it better syntactically. And of course what's faster or more memory efficient is an implementation detail ultimately, but moving away from the imperative style opens up the opportunity for new and interesting -- and potentially more grokkable, ultimately -- patterns. Rust is actually a great example of this, with things like match and pattern matching.
To use this in Bun, you’d have to start Bun with the environment variable `BUN_JSC_useDollarVM=1` and then call `$vm.createBuiltin(mySourceCodeString)`.
When using this intrinsic, if any of the arguments are incorrect or it cannot otherwise enable it, the entire process will probably crash. In debug builds of JSC it will fail with a nicer assertion failure, but that is not enabled in release builds.
I think this article actually does a really good job of explaining why TCO hasn't really been an important part of the JS ecosystem.
There are a lot of situations where recursive algorithms are really neat and clear. I don't know if this is the best example, but it shows the benefit of being able to split logic into a base case and a recurrence case.
But, in my experience, an algorithm that elegantly fits a recursive relationship is rarely one that naturally fits the tail-call paradigm. Often part of the benefit of recursion is that you can store state in the stack - the very thing you need to avoid to use TCO.
This means you often need to put in effort to create a tail recursive algorithm. But that often ends up looking a lot like the imperative case anyway - an accumulator outside the loop that you either mutate manually, or update in a tail call. And in my experience, the mutating, imperative version is usually then the easier to read and write (assuming you can keep mutations to a given scope, and not have that state leak all over the place). (In fairness, this might be more familiarity, though.)
In the light of this, what is the advantage of TCO? In functional languages without mutation, it's pretty important to allow for functions to act on arbitrarily-sized inputs without constantly growing the stack. But if we have mutation, it's really just a different way of writing the same code. And if that different way is generally less clear and almost always less performant, it probably isn't a very useful choice.
Which is why I think TCO hasn't really caught on in the other JS engines. It's a cool idea, and there's definitely a handful of cases where it's the useful way to go, but usually you'll be better served by writing things in the more traditional JS way.
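A minimal sketch of that accumulator convergence (my own example, not from the article): the tail-recursive version threads the accumulator through the arguments, and ends up shaped almost exactly like the loop that mutates it in place.

```javascript
// Tail-recursive: the running total rides along in the arguments and
// is updated in the tail call.
function sumRec(xs, i = 0, acc = 0) {
  return i === xs.length ? acc : sumRec(xs, i + 1, acc + xs[i]);
}

// Imperative: the same accumulator, mutated in a scoped variable.
function sumLoop(xs) {
  let acc = 0;
  for (const x of xs) acc += x;
  return acc;
}
```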
Tail calls also matter for code in continuation-passing style (CPS).
This matters for some styles of programming (callbacks/promises/monads/etc), but it's also very useful for code emitted by compilers and DSLs. It's much easier to encode complex control flow with CPS than just with the basic constructs JavaScript supports.
Even for hand-written code it can be pretty important. It's easy to imagine rewriting simple tail-recursive functions in an imperative style, but it quickly gets much harder, especially if your problem naturally decomposes into mutually recursive functions: that is, instead of a function calling itself, you have multiple functions calling each other, often with a bunch of additional computation between each call. I've run into this when writing parsers as well as custom algorithms for constraint-satisfaction kinds of problems. (Which are pretty common!)
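The mutual-recursion case can be shown with the classic even/odd toy (my example, not the poster's): every call is in tail position, yet neither function can be turned into a self-contained loop without merging the two into one.

```javascript
// Two functions bouncing between each other in tail position. With
// proper tail calls this runs in constant stack space; without them,
// a large enough n overflows even though no call has pending work.
const isEven = n => n === 0 ? true  : isOdd(n - 1);
const isOdd  = n => n === 0 ? false : isEven(n - 1);
```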
Regardless, it has been approved as part of the JavaScript specification, and the Chrome folks refuse for political reasons to implement it, which, given the market dominance everyone has helped them achieve, is a bummer.
> an algorithm that elegantly fits a recursive relationship is rarely one that naturally fits the tail-call paradigm.
That's because tail calls are structured gotos (with arguments). They're not really about recursion or even iteration.
> what is the advantage of [proper tail calls]?
There's a loss of expressiveness [1] without them, similar to the loss of expressiveness when there's no garbage collector.
> if [using proper tail calls] is almost always less performant, it probably isn't a very useful choice.
Don't assume PTC is less performant. There's no intrinsic reason for it to be. After all, the machine should be doing (slightly) less work: a goto instead of a call. Retrofitting it to an existing system can of course lead to a poor implementation, but it is not inherently a slow construct.
[1] Felleisen, Matthias. “On the Expressive Power of Programming Languages.” Science of Computer Programming, vol. 17, no. 1--3, 1991, pp. 35–75, https://doi.org/10.1016/0167-6423(91)90036-W .
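The "structured goto with arguments" framing can be sketched as a small state machine (my own illustration): each state transfers control to the next with a tail call, and under proper tail calls those transfers consume no stack.

```javascript
// A tiny traffic-light state machine: each state function "goes to"
// the next state via a tail call, threading the remaining step count
// and a log as arguments -- a structured goto with arguments.
function green(steps, log)  { log.push("green");  return steps === 0 ? log : yellow(steps - 1, log); }
function yellow(steps, log) { log.push("yellow"); return steps === 0 ? log : red(steps - 1, log); }
function red(steps, log)    { log.push("red");    return steps === 0 ? log : green(steps - 1, log); }
```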
I don’t think the example is particularly good. TCO is essential in functional programming, where recursion shines, but the example given is an impure function which mutates its input parameter by calling Array.prototype.push. Basically, this isn’t a demonstration of the paradigm for which TCO is required.
We’ll never know how big of a role TCO could have played in JavaScript over the past several years, because it hasn’t been available outside of some narrow contexts.
> Recursion takes up precious memory on the call stack! There may be commands to increase the call stack size for your program, but there is only so much the OS will allow. Memory on the heap is much less restricted.
This reasoning implies that tail call optimization is some sort of memory-segmentation workaround. This isn't accurate.
A (non-TCO) recursive approach is more expensive because it uses O(n) bookkeeping memory, whereas the iterative approach uses O(1) memory. Naive recursion will allocate a stack frame for each "iteration", whereas the loop approach will not allocate anything per iteration.
So, the iterative approach simply uses much less memory for large iteration counts; the placement of the memory in stack space or heap space isn't the relevant factor.
But without TCO, even if your function doesn't have any state of its own, it will use memory on the call stack. I don't think anything about what they said implied memory segmentation was the problem.
It's pretty clear they mean each recursion increases the size of the call stack without TCO.
This is not actually about Tail Call Optimisation, which is a more flexible and optional matter of optimisation, but about Proper Tail Calls, which are a guarantee made in the ECMAScript 6 specification (over implementers’ objections)—in strict mode, calls in tail position must not create additional stack frames. This is the last piece of ECMAScript 6 that most engines haven’t implemented, because it’s rather controversial: it actually causes some performance problems, makes debugging harder, and may have security issues (in 2016, Mozilla declared it impossible to implement across realm boundaries due to their security model).
https://github.com/tc39/proposal-ptc-syntax has a lot of useful information about it all, and a proposal to make it explicit in syntax, such as with `return continue …`, though I don’t believe that’s really gone anywhere. The practical situation is that this is the one part of the ECMAScript spec that most implementers ignore, and thus which you can’t depend on. Which says to me that it should either be made optional or be removed from the spec. Not sure if there are any other features similarly disregarded. ECMAScript specification policy was different in those days, they operated more independently of implementers. (HTML was like that once too—that’s roughly why WHATWG was formed, because the actual implementers weren’t happy with how things worked in W3C, so they took matters into their own hands.)
(Fun terminology problems here. The term TCO is commonly used for PTC, and PTC is very close to being a subset of TCO, but the mandatory stack frame elision which ruins debugging feels to me like it falls outside of TCO. In various situations, debuggers will mark things like “stack frame omitted” when they’ve optimised one out of existence, but you can generally compile things differently, or something like that, to prevent this. (Compiled/dynamic language variation showing up here.) But with PTC, it feels like the engine is kinda not even allowed to know that stack frames may be absent. So I say PTC and TCO are a little distinct, though PTC is mostly just a subset of TCO. Reminds me of the terminology of tree-shaking versus dead code removal—where the former is essentially a subset of the latter, but the effects are just slightly different, though I’d say it’s more slight in that case than this.)
It's nice, but the downside is that relying on TCO can make your Bun-tested code unexpectedly break with a stack overflow error if it ever gets run on Deno or Node. So it's probably a good idea to comment this part of the code and, if it's in a library, to similarly warn the callers somehow.
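One defensive pattern along those lines, sketched under my own assumptions rather than as any established idiom: probe at startup whether the engine actually eliminates tail calls, and route to an iterative fallback when it doesn't.

```javascript
"use strict"; // proper tail calls are specified for strict mode only

// Probe: a deeply tail-recursive no-op. On an engine with proper tail
// calls (e.g. JavaScriptCore) the probe completes and we return true;
// elsewhere it overflows the stack, we catch the RangeError, and we
// report false instead of crashing somewhere deep in library code.
function supportsProperTailCalls(depth = 100000) {
  const probe = n => n === 0 ? true : probe(n - 1);
  try {
    return probe(depth);
  } catch (e) {
    if (e instanceof RangeError) return false; // "Maximum call stack size exceeded"
    throw e;
  }
}
```

A library could run this once and pick the recursive or iterative implementation accordingly, rather than letting callers discover the difference in production.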
> [The tail call optimized] function is starting to look a lot like the orignal for loop solution. It’s not very succinct or elegant anymore.
(Brackets mine.)
I don't know about succinct but I definitely like how it looks a lot more. It's very readable, while the ternary operator one looks illegible at that length. It looks shoehorned in. It's the programming language equivalent of trying to mumble a whole sentence in two syllables.
TCO should be table stakes. An interpreter can do it by noticing that the next thing is a call to itself (the interpreter), a compiler by reorganising the stack frame before the jump.
Lots of stuff gets in the way. Destructors, varying type signatures, calling conventions. But for the case where it can happen, it should be considered an implementation error that it does not. Very like a space leak.
edit: this ^ was evidently unclear - tail call support in an interpreter means recognising when the interpreter is about to call the interpreter and jumping there, e.g. in an eval-apply setup, you use one frame for the eval-apply-eval-apply skeleton and only spawn more for evaluation of function arguments.
Tail call support in a compiler involves changing the calling convention to clean up before the jump. At that point it's all jumps, whether back to the top of a loop or to some other basic block, because all the world is SSA to a pretty good approximation.
Neither of those makes any meaningful distinction between calls-to-self, calls-to-sibling, calls-to-indirect and so forth. The "TCO is easier on self calls" idea comes from languages where it is hacked in before the compiler backend, in a setting where goto isn't allowed to cross function boundaries. Some C code will have a label at the top named "self" or similar and use `goto self;` to force the equivalent of a tail call; some compiler stacks implement TCO by a mechanical version of the same hack and thus can't deal with it crossing between different functions.
edit2:
If you're compiling to a target with restrictions on calling convention that preclude cleanup before jump (and doesn't sort this out itself) you are out of luck. Pick a better target or live in the doomed world you've created. See e.g. https://v8.dev/blog/wasm-tail-call for some stuff on wasm trying to fix itself, I haven't kept up to date with whether they did or not.
TCO is about more than self-recursive calls -- you should be able to handle mutual recursion:
f = x => x > 0 ? g(x - 1) : 0;
g = x => x > 0 ? f(x - 1) : 0;
(These functions don't do anything other than stress-test the stack, then return 0.)
Note that if I changed this to, say, 1 + g(x - 1), this would no longer fall under the purview of tail recursion, where the last instruction before the return is a recursive call.
The usual workaround I learned in my SICP days was to add a second variable for accumulating any results. And, indeed, that's how the blog post rewrites the recursive function.
(The recursive one-liner is still wildly inefficient because JS doesn't implement arrays as linked lists, so the spread operator results in a copy for each recursive call, resulting in a quadratic solution. It could be made more efficient by using Array.prototype.push, but then it's no longer a pure function.)
The reason TCO is not implemented in V8, IIRC, is developer experience: devtools need to faithfully dump the call stack, un-flattened, because that's what's expected of JS DX.
Does JSC's implementation solve this? Do they exhaust the stack only when devtools are open, or something? Maybe they just keep the last 10 frames in memory for devtools? (That would make most sense to me.)
The performance difference is not just about TCO, but about creating a new array on each iteration instead of reusing the old one... in Lisp, because it uses cons cells which share structure, appending is very efficient, so the recursive code runs as fast as it can get... not so much with JS arrays.
ECMAScript continues to evolve and I suspect that there would have been an effort to make the spread operator into a sort of cons in JavaScript. It could have worked for both arrays and objects. However, after seeing that the entry-level bar of proper tail calls couldn’t make it out the door for Chrome and V8, they might have decided not to push it and risk further divergence from a major implementation.
Isn't prepending the efficient operation? Appending would require mutation or copying, prepending only requires a single new cons cell.
Appending to a linked list is fine if you can mutate and keep a pointer to the tail, but then again, extending mutable arrays is also amortized O(1) assuming a reasonable allocation scheme.
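To illustrate the cons-cell point in JS terms (my own sketch): a pair-based list shares structure, so prepending allocates one cell and copies nothing, unlike spreading an array.

```javascript
// Minimal cons cells: each node is { head, tail }, and tails are shared.
const cons = (head, tail) => ({ head, tail });

// Prepending is O(1): one new cell; the old list is reused as the tail.
const list = cons(1, cons(2, cons(3, null)));
const longer = cons(0, list); // shares `list` rather than copying it

// Walk the cells back into an array for inspection.
function toArray(xs) {
  const out = [];
  for (let node = xs; node !== null; node = node.tail) out.push(node.head);
  return out;
}
```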
> Languages that rely on recursion like LISPs often specify that TCO must be implemented in their language spec.
This is inaccurate. As far as I can tell, it's only Scheme specifications like R5RS that have a hard requirement on tail-call optimization. The Common Lisp specification makes no mention of tail-call optimization, and Clojure -- arguably the most widely used Lisp today -- does not have it either (but it has special forms and macros that let you write recursive code and produce trampolines, thunks, etc. to avoid blowing up the stack).
With that said, at least in the case of Common Lisp, it is true that many compilers will implement TCO, even though it is *not* required at all to conform to the specification.
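The trampoline technique mentioned above translates directly to JS (a sketch with my own names): instead of making the tail call, return a thunk, and let a driver loop bounce until a real value comes back; the stack stays flat on any engine.

```javascript
// Driver: keep invoking thunks until a non-function value appears.
function trampoline(step) {
  while (typeof step === "function") step = step();
  return step;
}

// Would-be tail calls are replaced by returning zero-argument closures.
function countDown(n) {
  return n === 0 ? "done" : () => countDown(n - 1);
}
```

`trampoline(countDown(1000000))` completes without growing the stack, where the directly recursive version would overflow on engines without proper tail calls.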
Common Lisp is not a Lisp which 'relies on' recursion. It has various loop statements (DO, DOTIMES, DOLIST, ...), various mapping functions (map, mapcar, mapc, maphash, ...) and a complex LOOP. To prevent stack overflows those can all be built on top of a goto operator (tagbody + go). One of the reasons was that compilation targets (CPUs, virtual machines, other languages, ...) don't have native TCO support. For example the various Lisp Machines had no TCO support, and many C compilers did not support TCO for quite some time. C is a popular compilation target (KCL, AKCL, ECL, CLASP, CLICC, ... were (some are) all Lisp to C compilers).
> With that said, at least in the case of Common Lisp, it is true that many compilers will implement TCO, even though it is not required at all to conform to the specification.
The specification does not require it; it actually says nothing about it. TCO isn't even mentioned. That means it also says nothing about how a compiler should support it, if it wanted to. The language supports recursion, but something like tail recursion support is entirely an implementation-specific optimization. Since TCO has some advantages (less stack space needed, possibly some faster loops, ...), many compilers implement it, but mostly no interpreter does.
Also, most of them support GC, even though exactly nothing is said about it in the specification. ;-)
While working on effection@v3 (https://github.com/thefrontside/effection) we spent a bunch of time ensuring that our delimited continuations could handle deep recursive call stacks in Deno.
What’s worse, hitting the maximum call stack size exception is very tricky to catch and is not reliable across runtimes. So when a user hits it, it can be tricky to track down.
So the optimization here is that instead of creating a new array every time you call the count function recursively, you're only modifying a single array. Thus the new array recursive function is significantly slower because we're storing all those arrays in memory when calling the count function recursively. Do I have that right?
I spent the first year or two of my career thinking I needed to Do Recursion because thats what the textbooks said and hey, the language supports tail calls!
I learned, in the sense you learn by touching a hot stove, that:
A) you can't guarantee the compiler sees it as a tail call and optimizes it.
B) you are now open to a stack overflow appearing in your software at any given point due to compiler changes / your own changes that confuse the compiler, etc.
Both Scala 3 and Kotlin allow you to mark functions as tail-recursive (@tailrec in Scala, the tailrec modifier in Kotlin), which will give you a compiler warning if the compiler can’t optimize. And if you use the -Xfatal-warnings compiler flag, the Scala code won’t compile if your function is not tail recursive.
>"I spent the first year or two of my career thinking I needed to Do Recursion because thats what the textbooks said"
Being an old fart, I learned long ago not to follow self-declared prophets. Recursion is a trade-off that comes with a cost. Sometimes it makes sense to use it and sometimes (most of the time) it does not. You do what makes you and your customers happy.
> While you can nest a recursive function in a tertiary statement coro, don’t.
Man, why do you gotta call me out like this? :D
Don't be such a language grinch.
A neat thing about tail calls in JavaScriptCore: there’s an explicit bytecode intrinsic for it.
Example code: https://github.com/WebKit/WebKit/blob/17351231b4dedb62d81721...
Also happy to answer any questions about Bun.
What is the priority of app router integration for NextJS?
Nicely put. TCO is a bit like a guitar solo: more fun to implement than it is useful/enjoyable to the intended recipients.
https://neopythonic.blogspot.com/2009/04/final-words-on-tail...
It makes the article more interesting!
We are decades past the era when lists were the only option.
PR: https://github.com/thefrontside/continuation/pull/11
TCO would have definitely simplified this issue.
Or the cost of running a company’s architecture on a “new” (not really new, but quickly evolving re: frameworks) JavaScript.
C) qed, don't do tail calls.