Mike's one of the more fascinating people around. He's one of the few people ever to receive a discharge from the Navy as a conscientious objector after becoming a pacifist.
Since you're using the auto-tuned index for FLANN, it's going to build a large number of indexes to try to find the best-performing one, which can take a long time. Properly tuned, I would expect FLANN to be much faster (though that's unsurprising, since it doesn't do exact searches). Also, benchmarking the index build time together with query times doesn't make sense to me: depending on the application, you may not care how long it takes to build the index.
"I’d guess that it took about 10 times as much work to create my Haskell-based cover tree than it would have taken to create a similar C-based implementation. (I’m roughly as proficient in each language.)"
Interesting, since I would imagine implementing something in C would take much longer than doing the same in something like Python or Java. On the other hand, I suppose getting something to work is not the same as getting it to run fast.
He says "Because Haskell code is so high level, it requires aggressive compiler optimizations to perform well."
I think you are basically right. It is easier to make something correct in Haskell, and it is generally easier to make something reasonably fast in Haskell too. But in the specific circumstances where you want maximum performance and care about the assembly or intermediate code output, the performance optimization becomes easier in a low-level language like C.
Having said that, the 10× effort is an investment: if you get a reusable library out of it, it will pay off, because the rest of your code is in Haskell, not C!
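To make that concrete, here's a minimal sketch (my own illustration, not from the article) of the kind of thing that makes Haskell performance tuning a fight with the optimizer rather than with the source code: a single strictness annotation can be the difference between a thunk pile-up and a tight loop.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Lazy accumulator: without optimization this builds a chain of thunks
-- on large inputs. GHC's strictness analysis usually rescues it at -O2,
-- but then you are reasoning about the optimizer, not the source.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- Strict accumulator: the bangs force evaluation at each step, so GHC
-- reliably compiles this to a tight accumulation loop regardless of flags.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs
```

In C the loop you write is roughly the loop you get; here the two definitions mean the same thing but can compile very differently, which is what "it requires aggressive compiler optimizations to perform well" cashes out to.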
This algorithm operates on trees. But I've always wondered: how can one create efficient algorithms that operate on general graphs in a functional language such as Haskell?
For example, how would one write a fast version of Dijkstra's shortest path algorithm in Haskell?
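It's quite doable without mutation, for what it's worth. Here's one sketch (my own, certainly not the fastest possible version) using `Data.Set` as the priority queue and `Data.Map` for tentative distances, which keeps the usual O((V + E) log V) bound:

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S

-- Adjacency list: vertex -> [(neighbour, edge weight)]
type Graph = M.Map Int [(Int, Int)]

-- Purely functional Dijkstra. The Set acts as the priority queue of
-- (distance, vertex) pairs; stale entries are skipped on extraction.
dijkstra :: Graph -> Int -> M.Map Int Int
dijkstra g src = go (S.singleton (0, src)) (M.singleton src 0)
  where
    go frontier dist = case S.minView frontier of
      Nothing -> dist
      Just ((d, v), rest)
        -- A shorter path to v was already settled; drop the stale entry.
        | M.findWithDefault maxBound v dist < d -> go rest dist
        | otherwise ->
            let neighbours = M.findWithDefault [] v g
                relax (f, ds) (u, w)
                  | d + w < M.findWithDefault maxBound u ds =
                      (S.insert (d + w, u) f, M.insert u (d + w) ds)
                  | otherwise = (f, ds)
                (frontier', dist') = foldl relax (rest, dist) neighbours
            in go frontier' dist'
```

A heavily tuned version would use a proper priority queue (or mutable arrays in `ST`), but the log-factor overhead of the persistent structures is often acceptable.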
Not that I'm the proshit Haskell expert, but I don't think that GHC does the most amazing job of applying optimizations, and I definitely think its performance model is really hard to reason about.
Still, it was a nice breakdown of a real non-toy problem in Haskell!
I've heard quite the opposite. I've heard that GHC is state of the art in the depth and breadth of optimizations it can perform. I'm struggling to remember if the context was "of everything" or just within pure functional programming.
ximeng|10 years ago
tdees40|10 years ago
dfbrown|10 years ago
rifung|10 years ago
bbcbasic|10 years ago
amelius|10 years ago
tel|10 years ago
It's not a sin, just something to avoid when not strictly necessary.
http://jspha.com/posts/mutable_algorithms_in_immutable_langu...
dottedmag|10 years ago
airza|10 years ago
codygman|10 years ago
birdsbolt|10 years ago
thomasahle|10 years ago