What I don't understand with STCs is, how would I, as a developer, decide when to use them and when not?
As far as program correctness is concerned, tail calls and recursive calls should behave exactly the same (except for stack use), so going by correctness alone, either the choice doesn't matter at all or tail calls are preferable. So the logical thing to do would be to always use tail calls.
The post goes on to list a number of disadvantages, but those seem to be either debugging concerns or platform-specific implementation concerns.
The former doesn't seem to be something that should be solved in the code - the document itself lists a number of solutions. The latter is not something that I, as a developer, could judge, as I likely don't have knowledge about the specific implementation of the platform the code is running on (if I know the platform in advance at all).
So it seems to me, STC shifts the problem of deciding to the developer even though the developer has no good tools to actually solve it.
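For concreteness, here is a sketch of the distinction the choice hinges on (function names are illustrative, not from the post):

```javascript
"use strict"; // the ES2015 spec only guarantees proper tail calls in strict mode

// NOT a tail call: the multiplication happens *after* the recursive
// call returns, so every call needs its own stack frame regardless of TCO.
function factorial(n) {
  if (n <= 1) return 1;
  return n * factorial(n - 1);
}

// Proper tail call: the recursive call is the very last thing the
// function does, so an engine that implements TCO can reuse the frame.
function factorialTail(n, acc = 1) {
  if (n <= 1) return acc;
  return factorialTail(n - 1, acc * n);
}

console.log(factorial(5));     // 120
console.log(factorialTail(5)); // 120
```

Both versions compute the same result; only the second is even a candidate for tail-call elimination, and whether it actually gets it depends on the engine.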
I think that TCO (and similar optimisations) are a bane to most junior developers. I vividly remember pulling my hair out when trying to single step through optimised C++ code early in my career. But in my experience, all good developers come to this point where they realise -- Hey, I don't need to single step through this code. I can read it and understand what it is supposed to do. These days, with the popularity of techniques like unit testing, debuggers are not really necessary. I've spent the last 4 1/2 years writing Ruby and JS code and I don't even know how to use the various debuggers -- never needed to.
So I'm with you. Just give me TCO every single time -- or give it to me never. Optimising it by hand is not that hard, and it's what most non-functional programmers do most of the time anyway. You'll see loops where recursion would be clearer, but then when you look at it hard you'll realise that it's just the TCO optimised version of the same code.
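The hand optimisation described above can be sketched like this (a minimal example, not from the post):

```javascript
// Tail-recursive sum of 1..n. Without TCO, a large n overflows the stack.
function sumTo(n, acc = 0) {
  if (n === 0) return acc;
  return sumTo(n - 1, acc + n);
}

// The same function optimised by hand: the tail call becomes a loop
// that rebinds the "parameters" - which is exactly what TCO would do.
function sumToLoop(n) {
  let acc = 0;
  while (n !== 0) {
    acc = acc + n;
    n = n - 1;
  }
  return acc;
}

console.log(sumToLoop(1e6)); // 500000500000 - no stack overflow
```

Read backwards, the loop is "just the TCO optimised version" of the recursion: same accumulator, same termination condition.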
What's the best way to benchmark perf optimizations in JavaScript? I see a lot of articles like this with no time and memory usage stats.
I would like to hear an answer from someone who has experience using gdb or Visual Studio while studying performance. Whenever I use the Chrome debugger to time anything involving async calls and recursion I have this uncomfortable feeling I am not getting it right. With gdb or Visual Studio C/C# I always feel like I know what is going on.
Having worked on this pretty extensively, my preferred method is: (1) make two implementations of the function I'm trying to speed up, (2) add a wrapper so half of that function's calls get sent to each version, and then (3) profile in Chrome and compare the results (particularly the time spent in the two alternate implementations, but overall as well).
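A rough sketch of that wrapper approach (the implementations and names here are hypothetical stand-ins):

```javascript
// Two candidate implementations of the function being tuned.
function implA(arr) {
  let sum = 0;
  for (let i = 0; i < arr.length; i++) sum += arr[i];
  return sum;
}

function implB(arr) {
  return arr.reduce((sum, x) => sum + x, 0);
}

// The wrapper alternates calls, so each implementation sees half of the
// real call pattern. Profile in DevTools and compare the self/total time
// attributed to implA vs implB in the flame chart.
let flip = false;
function sumWrapped(arr) {
  flip = !flip;
  return flip ? implA(arr) : implB(arr);
}
```

Because both versions run interleaved inside the real workload, they are subject to the same JIT state and GC pressure, which sidesteps most of the problems of isolated microbenchmarks.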
I know that's not the gdb-like experience you're looking for, but as far as I've found it's the best approach. The issue is that modern JS engines achieve their speed by tracking what happens at runtime and dynamically re-optimizing hot functions - so microbenchmarks are largely meaningless, and the performance of a given function can be hugely affected by code that's far away.
(The above is for everyday use. For extreme deep-diving, one can use JS engine tools to see what kind of internal representation your code has been compiled into after it got optimized. In Chrome this is done with IRHydra - http://mrale.ph/irhydra/2/ .)
Google has a lot of content, for example "Chrome DevTools => Analyze Runtime Performance => Get Started With Analyzing Runtime Performance" [0].
But an important piece of advice is at the bottom of one of their pages [1]: "Avoid micro-optimizing your JavaScript". Apart from their argument there, keep in mind you are programming for a number of very different runtime environments. An optimization that gives you a big boost in one implementation may slow you down on another one. That is true not just between various vendors but also among runtime versions from the same vendor.
When doing microbenchmarks in JS you are likely to draw the wrong conclusions because of JS engine optimizations - for example, removing code whose result is not used.
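A minimal illustration of that trap (the function and the `sink` variable are made up for the example):

```javascript
function square(x) { return x * x; }

// If the loop body had no observable effect, an optimizing engine could
// eliminate it entirely and you would be timing an empty loop. Feeding
// every result into a live variable keeps the work from being removed.
let sink = 0;

function bench() {
  const t0 = Date.now();
  for (let i = 0; i < 1e6; i++) {
    sink += square(i); // result is used, so the call must actually run
  }
  return Date.now() - t0;
}

console.log(`elapsed: ${bench()} ms, sink: ${sink}`);
```

Even with a sink, timings from a loop like this only compare relative costs within one engine version; they say little about other runtimes.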
The proper way to optimize is to identify bottlenecks and look at CPU and memory profiles when running in production, then optimize the code that brings the most "bang for the buck" - likely to be found at the top of a flame graph.
When doing the actual code optimizations: remove unnecessary abstractions by inlining functions, use native objects, avoid allocations, and preallocate fixed buffers to get rid of GC pauses by not creating new objects.
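The preallocation point might look like this in practice (a sketch with invented names, assuming a hot path that repeatedly transforms small arrays):

```javascript
// Instead of allocating a fresh array on every call, the caller supplies
// a preallocated buffer that is reused across calls - no per-call garbage.
function scaleInto(out, input, factor) {
  for (let i = 0; i < input.length; i++) {
    out[i] = input[i] * factor;
  }
  return out;
}

// Allocated once, reused on every iteration of the hot loop.
const buf = new Float64Array(3);
scaleInto(buf, [1, 2, 3], 2); // buf is now [2, 4, 6]
```

Typed arrays also keep the data in a flat, unboxed layout, which is usually friendlier to the JIT than arrays of mixed values.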
JavaScript TCO can't be feature detected so how do you know when tail calls are safe to use in front end or library code that can run on a variety of clients?
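There is indeed no syntactic or API-level feature test. The closest workaround I'm aware of is a crude runtime probe (a sketch, not a guaranteed technique - an unusually large stack could produce a false positive):

```javascript
// Attempt a deep strict-mode tail call and see whether the stack blows.
function supportsTCO() {
  "use strict"; // spec only mandates proper tail calls in strict mode
  function probe(n) {
    if (n === 0) return true;
    return probe(n - 1); // a proper tail call
  }
  try {
    return probe(1e6);
  } catch (e) {
    return false; // stack overflow => tail calls are not being eliminated
  }
}

console.log(supportsTCO()); // likely false on most engines today
```

Even where this works, it only tells you about the engine you happen to be running on right now, which is exactly the problem for library code.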
ecma_horse|9 years ago
Otherwise, you may decide to just use recursion everywhere and then your stack overflows when your N is bigger than expected.
When in doubt, don't use recursion and don't expect that you get TCO.
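A minimal demonstration of that failure mode (function name is illustrative):

```javascript
// A proper tail call - but without TCO it still consumes a stack frame
// per call, so a large enough N overflows the stack.
function countDown(n) {
  if (n === 0) return "done";
  return countDown(n - 1);
}

console.log(countDown(1000)); // "done" - small N is fine

try {
  countDown(1e7); // overflows on engines without tail-call elimination
} catch (e) {
  console.log("overflowed:", e.name);
}
```

The insidious part is that the small-N case works in testing, and the overflow only shows up later when real input sizes arrive.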
holmberd|9 years ago
Small summary code: http://paste.ubuntu.com/24568118/
devrandomguy|9 years ago
http://ramdajs.com/
[0] https://developers.google.com/web/tools/chrome-devtools/eval...
[1] https://developers.google.com/web/fundamentals/performance/r...
jekrb|9 years ago
https://github.com/mafintosh/nanobench
For debugging, llnode is super good:
https://github.com/nodejs/llnode