I've already posted this on /r/haskell on reddit, but since the article turned up here as well, I might as well post it again. Since we're talking optimization, I had a short look at the C code and found it to be a very naive implementation; I managed to cut the execution time quite a bit by employing a form of memoization: once you reach a number of the sequence that you have already seen, the length from there to 1 is the same as before. So here is the code: http://lpaste.net/96397
Here are my measurements:
time ./original 1000000
(329, 837799)
real 0m0.183s
user 0m0.180s
sys 0m0.001s
time ./new 1000000
(329, 837799)
real 0m0.017s
user 0m0.015s
sys 0m0.001s
deletes|12 years ago
anon1385|12 years ago
I would suggest it's best to show (at least) four versions of the code to compare two languages: the naive implementation in both languages, and the really optimised version in both languages (plus intermediate levels of optimisation if you feel dedicated). That gives a picture of how fast the typical code one would write will be, as well as the potential for optimisation if you run into bottlenecks, and how much effort it takes to optimise the code.
[1] http://augustss.blogspot.co.uk/2007/08/quicksort-in-haskell-...
ejgottl|12 years ago
tsahyt|12 years ago
What we can do is measure complexity empirically and extrapolate from there. I haven't done a lot of tests with this code, but it seemed to scale the same as the original one, just with a much lower constant.