top | item 1833203

What algorithm blows your mind? (Reddit compsci)

129 points | barrkel | 15 years ago | reddit.com | reply

66 comments

[+] jgrahamc|15 years ago|reply
I've always found error detecting/correcting codes to be lovely. A great example, that's easy to understand, is the Hamming(7,4) code.

http://en.wikipedia.org/wiki/Hamming(7,4)

I got interested in these codes because of the problems involved with deep space communication.

http://en.wikipedia.org/wiki/Error_detection_and_correction#...
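
A toy Python sketch of the (7,4) round trip (my own illustration, not from the linked articles): parity bits sit at positions 1, 2 and 4, and the parity-check syndrome spells out the 1-based position of a single flipped bit.

```python
def encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword."""
    code = [0] * 8  # 1-indexed; index 0 unused
    code[3], code[5], code[6], code[7] = d  # data bits
    for p in (1, 2, 4):  # parity bit p covers every position i with i & p
        for i in range(1, 8):
            if i != p and (i & p):
                code[p] ^= code[i]
    return code[1:]

def decode(c):
    """Decode a 7-bit codeword, correcting any single-bit error."""
    code = [0] + list(c)
    syndrome = 0
    for p in (1, 2, 4):
        s = 0
        for i in range(1, 8):
            if i & p:
                s ^= code[i]
        if s:
            syndrome += p
    if syndrome:                 # syndrome == error position
        code[syndrome] ^= 1
    return [code[3], code[5], code[6], code[7]]
```

Flip any one of the seven transmitted bits and it still decodes to the original four data bits.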

[+] icegreentea|15 years ago|reply
I learned Hamming codes in school this summer (digital systems). It was probably the single most magical thing (even more than flip-flops! Loops make memory!) I was exposed to in that class.

I remember that upon learning it, someone asked, "Wow, who came up with that?" The prof answered dryly, "Some genius named Hamming."

[+] mayank|15 years ago|reply
All very nice suggestions below, but I found suffix trees missing: http://en.wikipedia.org/wiki/Suffix_tree

And the reasons why it blows my mind:

(1) Knuth called it "Algorithm of the year 1973" (possibly because it beat an intuitive lower bound that he had for a problem I can't remember).

(2) It's relatively new for a "core", first-principles algorithm. Ukkonen's algorithm is from 1995, although there are earlier versions.

(3) This BOOK is largely devoted to the many, many applications of suffix trees: http://www.amazon.com/Algorithms-Strings-Trees-Sequences-Com... It should be required reading for anyone interested in advanced algorithms.

(4) Longest common substring in linear time.

(5) Lempel-Ziv decomposition in linear time.

(6) Longest repeated substrings in linear time.

And too many more to list.
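
For a taste of application (6): a naive sketch of the longest-repeated-substring problem using sorted suffixes and adjacent longest common prefixes. A suffix tree (deepest internal node) gives the same answer in linear time; this is just the slow teaching version.

```python
def longest_repeated_substring(s):
    # Any repeated substring is a common prefix of two suffixes, and the
    # longest one is a common prefix of two *adjacent* suffixes in sorted
    # order. (Naive construction; Ukkonen's suffix tree does this in O(n).)
    suffixes = sorted(s[i:] for i in range(len(s)))
    best = ""
    for a, b in zip(suffixes, suffixes[1:]):
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1
        if k > len(best):
            best = a[:k]
    return best
```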

[+] nl|15 years ago|reply
Bloom Filters: http://en.wikipedia.org/wiki/Bloom_filter

Sort of a probabilistic hash, where you trade space for accuracy. But it's also like a memory function - it can remember if it has seen a piece of data before.
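
A minimal sketch of the idea (my own toy version; real implementations size m and k from the expected element count and target false-positive rate):

```python
import hashlib

class BloomFilter:
    """m-bit Bloom filter; k hash positions derived from salted blake2b."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = 0  # a Python int doubles as an arbitrary-size bit array

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.blake2b(item.encode(), salt=bytes([i])).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # Can report a false positive, never a false negative.
        return all(self.bits >> p & 1 for p in self._positions(item))
```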

[+] daeken|15 years ago|reply
What I love about bloom filters is that when checking to see if something is in a bloom filter, you can only get false positives, not false negatives. That property is just awesome to me.
[+] pjscott|15 years ago|reply
I love that they're fast enough to use in hardware. Some of the newer schemes for hardware-accelerated transactional memory use Bloom filters to probabilistically detect when two transactions are using the same memory addresses. This is so much easier and more efficient than previous approaches, which had to use content associative memories and expensive broadcast operations.
[+] gamache|15 years ago|reply
Singular value decomposition. It continually amazes me how many problems -- from noise reduction to Netflix prediction -- SVD can be usefully applied to.

http://en.wikipedia.org/wiki/Singular_value_decomposition
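
You don't need a library to see the core mechanic: power iteration on A^T A converges to the top right singular vector, and |Av| is then the top singular value. A hand-rolled sketch (just the leading singular triple, not the full decomposition):

```python
def top_singular_value(A, iters=200):
    """Largest singular value of a (dense, list-of-lists) matrix A."""
    n = len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        # w = A^T (A v): one step of power iteration on A^T A
        Av = [sum(row[j] * v[j] for j in range(n)) for row in A]
        w = [sum(A[i][j] * Av[i] for i in range(len(A))) for j in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Av = [sum(row[j] * v[j] for j in range(n)) for row in A]
    return sum(x * x for x in Av) ** 0.5  # sigma_1 = |A v|
```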

[+] cdavid|15 years ago|reply
The core ideas behind SVD and eigenvalues are very rich and profound. They have fascinated me ever since I studied maths in undergrad.

Someone else already mentioned compressed sensing, which expands on some of those ideas. Terence Tao had a pretty good presentation on the topic: http://terrytao.files.wordpress.com/2009/08/compressed-sensi...

[+] rdtsc|15 years ago|reply
I still do not completely understand it. I see the math, but I see it as symbols that I can memorize, rather than something intuitive (like when I see a moving cursor through a state tree during a binary search for instance).
[+] jefffoster|15 years ago|reply
Compressed Sensing is exciting - using a sequence of low res images to obtain a higher resolution sample. It was discussed in Wired (http://www.wired.com/magazine/2010/02/ff_algorithm/all/1) together with a compelling example (minimizing the time of a young patient in an MRI machine)
[+] thalur|15 years ago|reply
That sounds far too good to be true. I guess if you don't care about the fine details then it's a decent technique, but to take the MRI example, what if the thing that was wrong was only visible in those small details?
[+] anamax|15 years ago|reply
Compressed sensing was the subject of a recent talk which can be viewed at ee380.stanford.edu as well as via itunes and youtube.
[+] endtime|15 years ago|reply
For me, it's boosting. Basically, if you make the assumption that your weak classifier can achieve X% correct on any random distribution over a data set, then you can create an ensemble of weak learners that together get Y% correct on the same set, where Y > X.

Question 3 here will walk you through the proof: http://bit.ly/cQ03na
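
The flavor of the result shows up even in a much simpler setting: if you (unrealistically) assume the weak classifiers are independent, a plain majority vote already amplifies accuracy. Boosting itself reweights the data so the learners are deliberately *not* independent, which is what the linked proof actually handles; this is just the intuition.

```python
from math import comb

def majority_accuracy(p, n):
    # Probability that a majority of n independent classifiers, each
    # correct with probability p, votes for the right answer (n odd).
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))
```

With p = 0.6, three voters already get you to 0.648, and 101 voters push past 0.97.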

[+] Figs|15 years ago|reply
Please don't post links through bit.ly and other URL indirection sites. It hurts the web by making it much more difficult to follow the link in the case that an intermediary goes out of business. Hacker News and your post will probably be around in a few years, but will bit.ly? (What's their business model, exactly...?)

For future reference, the unshortened version of his link is: http://www.stanford.edu/class/cs221/handouts/cs221-ps2.pdf

[+] smcl|15 years ago|reply
Quake3's Fast InvSqrt function:

http://www.beyond3d.com/content/articles/8/

It's been discussed here before, and no doubt it'll be cited numerous times in the comments of that reddit post, but it was the first time I'd ever seen such trickery. When I got into low level DSP programming I saw many more examples of clever hacks, but this is the one which sticks in my mind above all of those.
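
For reference, the same trick transcribed into Python via struct (bit-for-bit the Quake III constant; the pack/unpack pairs stand in for C's pointer puns):

```python
import struct

def fast_inv_sqrt(x):
    # Reinterpret the float's bits as an integer, shift and subtract from
    # the magic constant, then one Newton-Raphson step to polish it.
    i = struct.unpack("<I", struct.pack("<f", x))[0]
    i = 0x5F3759DF - (i >> 1)
    y = struct.unpack("<f", struct.pack("<I", i))[0]
    return y * (1.5 - 0.5 * x * y * y)
```

One Newton step brings the relative error under about 0.18%.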

[+] cperciva|15 years ago|reply
My mind is blown by the algorithm for matching with mismatches which I present in the first chapter of my thesis. It shouldn't be, given that I discovered this algorithm -- but somehow "I take the Fourier Transform of the FreeBSD kernel" sounds more like the punch line to a joke than the first step in an algorithm.
[+] michael_nielsen|15 years ago|reply
Speaking of incredibly cool algorithms based on Fourier transforms, the Schönhage–Strassen algorithm for multiplying integers is also amazing:

http://en.wikipedia.org/wiki/Sch%C3%B6nhage%E2%80%93Strassen...

At least to me, it seems hard to imagine that multiplying two N-bit integers could be done with fewer than O(N^2) operations. Schönhage–Strassen lets you do it using O(N log N log log N) operations!
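
Schönhage–Strassen itself takes real work to implement, but the first crack in the O(N^2) barrier -- Karatsuba's O(N^1.585) method -- fits in a few lines and uses the same divide-and-conquer instinct: three half-size multiplications instead of four. A sketch (not the FFT-based algorithm in the link):

```python
def karatsuba(x, y):
    # With B = 2^n:  xy = ac*B^2 + ((a+b)(c+d) - ac - bd)*B + bd
    if x < 16 or y < 16:
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    a, b = x >> n, x & ((1 << n) - 1)   # high and low halves of x
    c, d = y >> n, y & ((1 << n) - 1)   # high and low halves of y
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a + b, c + d) - ac - bd
    return (ac << 2 * n) + (mid << n) + bd
```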

[+] jules|15 years ago|reply
Link?
[+] davidj|15 years ago|reply
493108359702850190027577767239076495728490777215020863208075 018409792627885097658864557802013660073286795447341128317353 678312015575359819785450548115719393458773300380099326195058 764525023820408110189885042615176579941704250889037029119015 870030479432826073821469541570330227987557681895601624030064 111516900872879838194258271674564774816684347928464580929131 531860070010043353189363193439129486044503709919800477094629 215581807111691530318762884778783541575932891093295447350881 882465495060005019006274705305381164278294267474853496525745 368151170655028190555265622135314631042100866286797114446706 366921982586158111251555650481342076867323407655054859108269 562666930662367997021048123965625180068183236539593483956753 575575324619023481064700987753027956186892925380693305204238 149969945456945774138335689906005870832181270486113368202651 590516635187402901819769393767785292872210955041292579257381 866058450150552502749947718831293104576980909153046133594190 302588132059322774443852550466779024518697062627788891979580 423065750615669834695617797879659201644051939960716981112615 195610276283233982579142332172696144374438105648552934887634 921030988702878745323313253212267863328370279250997499694887 759369159176445880327183847402359330203748885067557065879194 611341932307814854436454375113207098606390746417564121635042 388002967808558670370387509410769821183765499205204368255854 642288502429963322685369124648550007559166402472924071645072 531967449995294484347419021077296068205581309236268379879519 661997982855258871610961365617807456615924886608898164568541 721362920846656279131478466791550965154310113538586208196875 836883595577893914545393568199609880854047659073589728989834 250471289184162658789682185380879562790399786294493976054675 348212567501215170827371076462707124675321024836781594000875 05452543537
[+] RiderOfGiraffes|15 years ago|reply
Lenstra-Lenstra-Lovasz:

http://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%9...

High-dimensional work is bloody hard, and this algorithm works amazingly well. I've spoken with Lenstra (one of them) and he's amazingly insightful on these things. He helped to crystallise my understanding of why high-dimensional spheres should be thought of as "spiky," rather than "round."

[+] jules|15 years ago|reply
Why are high-dimensional spheres spiky?
[+] barrydahlberg|15 years ago|reply
How is generation of the Mandelbrot Set not on this list? That something so simple could produce something so indescribably beautiful...
[+] 8ren|15 years ago|reply
Agreed, but while the notation's simple, the iterative effect is hard to follow -

it's like a spinning rod, angle doubled and length squared each iteration, with a rod of fixed length and angle added to it each iteration (if the spinner is facing away, it gets shorter, but if towards, it gets longer). Iterate til it goes over 2. If it doesn't then it's in the Set (or maybe you need to iterate more...). [that's my current understanding]

It's pretty clear to me that I can't visualize what it will do over a few iterations (apart from simple cases) - and I sure can't see how it would produce self-similar (ie fractal) shapes. Agreed that that's amazing.
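
The iteration being described is just z → z² + c with escape radius 2, so a membership test is a few lines (using complex numbers instead of the spinning-rod picture):

```python
def in_mandelbrot(c, max_iter=100):
    # Escape-time test: squaring z doubles its angle and squares its
    # length, then c is added; if |z| ever exceeds 2 the orbit diverges.
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True  # survived max_iter rounds -- treat as in the Set
```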

[+] michael_nielsen|15 years ago|reply
The quantum search algorithm: http://en.wikipedia.org/wiki/Grovers_algorithm

It lets you search a completely unstructured N-item search space using the square root (!) of N queries, not the N queries you'd think were necessary.

Also, the algorithm is so simple that once you know it's possible, and provided you're very comfortable with basic quantum mechanics, it's almost trivial to find the algorithm.
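
You can watch it work with a classical simulation of the amplitudes (a sketch, not real quantum code): the oracle flips the marked item's sign, the diffusion step reflects every amplitude about the mean, and after about (π/4)√N rounds nearly all the probability sits on the target.

```python
import math

def grover(n_items, target, iterations):
    # Start in the uniform superposition over n_items basis states.
    amp = [1 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amp[target] = -amp[target]            # oracle: flip target's sign
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]     # inversion about the mean
    return [a * a for a in amp]               # measurement probabilities
```

For N = 16 and 3 iterations (≈ (π/4)·4), the marked item is measured with probability ~0.96.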

[+] abhijitr|15 years ago|reply
Ok, not quite an algorithm, but the PCP theorem (http://en.wikipedia.org/wiki/PCP_theorem) is easily the most mind-blowing result in comp sci in the past 20 years.
[+] mzl|15 years ago|reply
PCPs are true magic. I once learned how to construct a PCP-proof with verification using just 7 bits, but it still feels like magic to me. The 3-bit variants are just crazy.
[+] rhettinger|15 years ago|reply
The Burrows-Wheeler transform used in bzip2 is truly mind-blowing.

Many algorithms are ordinary genius (eventually you would have come up with it because the problem space dictates the solution), but this one is extraordinary genius (mind-blowingly original and non-obvious). Every time I see it, I'm amazed that it works.
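
The transform and its inverse are short enough to play with directly (naive quadratic sketch with a sentinel character, nothing like bzip2's tuned implementation):

```python
def bwt(s):
    # Forward transform: last column of the sorted rotation matrix.
    s += "\0"  # unique end-of-string sentinel, sorts before everything
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(r):
    # Inverse: repeatedly prepend the transform and re-sort; after n
    # rounds the table holds every rotation, and the row ending in the
    # sentinel is the original string.
    table = [""] * len(r)
    for _ in range(len(r)):
        table = sorted(r[i] + table[i] for i in range(len(r)))
    return next(row for row in table if row.endswith("\0"))[:-1]
```

The point, of course, is that the output clusters identical characters ("banana" becomes "annb\0aa"), which run-length and move-to-front coders then eat for breakfast.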

[+] cosbynator|15 years ago|reply
Welzl's algorithm for the minimum enclosing disk does it for me. Pick a point, recursively find the disk without that point -- if the point falls outside, it is part of the definition of the minimum enclosing disk (forced to be on the boundary). Randomized, simple, clean, and expected O(n).

Not enough people know about it! http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.46....
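
A sketch of the recursion in Python (my own transcription, not Welzl's code; degenerate collinear boundary triples are ignored for brevity):

```python
import random

def _circle_two(p, q):
    # Smallest disk through two points: they form a diameter.
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    return (cx, cy, ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 / 2)

def _circle_three(a, b, c):
    # Circumcircle via the perpendicular-bisector determinant formula.
    ax, ay, bx, by, cx, cy = *a, *b, *c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        return None  # collinear
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, ((ux - ax) ** 2 + (uy - ay) ** 2) ** 0.5)

def _inside(c, p, eps=1e-9):
    return c is not None and (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 <= (c[2] + eps) ** 2

def welzl(points):
    # Randomized incremental: solve without pts[i]; if pts[i] falls
    # outside, it must lie on the boundary, so re-solve with it pinned.
    pts = list(points)
    random.shuffle(pts)
    def mec(i, boundary):
        if i == len(pts) or len(boundary) == 3:
            if len(boundary) == 0: return (0.0, 0.0, 0.0)
            if len(boundary) == 1: return (*boundary[0], 0.0)
            if len(boundary) == 2: return _circle_two(*boundary)
            return _circle_three(*boundary)
        c = mec(i + 1, boundary)
        if _inside(c, pts[i]):
            return c
        return mec(i + 1, boundary + [pts[i]])
    return mec(0, [])
```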

[+] ardit33|15 years ago|reply
Kruskal's, Prim's, and the reverse-delete algorithm for finding the Minimum Spanning Tree of a graph are fun, not that hard, and very practical. http://en.wikipedia.org/wiki/Minimum_spanning_tree

Think about it: every time you get a Google Maps direction/route, one of them is in play.
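
Kruskal's, for instance, is a few lines once you have union-find: sort the edges by weight and greedily keep any edge that doesn't close a cycle. A sketch:

```python
def kruskal(n, edges):
    """MST of n nodes; edges are (weight, u, v) tuples."""
    parent = list(range(n))
    def find(x):  # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:              # edge joins two components: no cycle
            parent[ru] = rv
            tree.append((u, v))
            total += w
    return total, tree
```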

[+] joe_bleau|15 years ago|reply
CORDIC (http://en.wikipedia.org/wiki/Cordic), used to efficiently calculate trig functions with only very basic hardware requirements (add, sub, shift, and table lookup).
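
A software sketch of the rotation-mode iteration (the multiplications by 2^-i below are what hardware does with bare shifts; valid for |theta| up to about 1.74 rad):

```python
import math

# Precompute the arctan(2^-i) table and the accumulated CORDIC gain.
ANGLES = [math.atan(2.0 ** -i) for i in range(32)]
K = 1.0
for a in ANGLES:
    K *= math.cos(a)  # ~0.60725, the shrinkage from 32 pseudo-rotations

def cordic_sin_cos(theta):
    # Rotate the vector (K, 0) toward angle theta by fixed micro-rotations,
    # choosing the direction of each from the sign of the remaining angle.
    x, y, z = K, 0.0, theta
    for i in range(32):
        d = 1.0 if z >= 0 else -1.0
        x, y, z = (x - d * y * 2.0 ** -i,
                   y + d * x * 2.0 ** -i,
                   z - d * ANGLES[i])
    return y, x  # (sin(theta), cos(theta))
```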
[+] hxa7241|15 years ago|reply
. . . relatedly, from Jim Blinn, from Marvin Minsky -- drawing a circle (near enough), or doing incremental rotation, using just integer add, sub, and two shifts:

   N = 128 * 3.14159
   X = 1000
   Y = 0

   MOVE(X,Y)
   FOR I = 1 TO N
      X = X - (Y >> 6)
      Y = Y + (X >> 6)
      DRAW(X,Y)
[+] elblanco|15 years ago|reply
Some of the sorting algorithms. As somebody learning them, it's almost entirely opaque how anybody could have come up with them.

It seems like they must have sprung forth whole cloth to the inventor in the shower... it's almost impossible to imagine some of them being developed iteratively, because even small changes to the algorithms produce terrible results. I remember thinking over and over again, "How the hell could somebody come up with this?"

Searching, in comparison, looks very engineered, very studied; something that most people could come up with given need, motivation, and time.

[+] jeffmiller|15 years ago|reply
BSP trees blew my mind when I discovered they were under the hood of Doom. This was back in the era of 386 and 486 PCs.
[+] dlnovell|15 years ago|reply
The Genetic Algorithm. It totally blew my mind when I read about its successful application to evolving satellite antennas for JPL, and the automated design of an analog-to-digital signal converter that relies on a component that isn't actually connected to anything else.
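
For a feel of the mechanics, here's a toy GA (nothing like JPL's antenna evolver -- it just maximizes the count of 1 bits) with tournament selection, one-point crossover, and point mutation:

```python
import random

def genetic_algorithm(n_bits=32, pop_size=40, generations=120):
    """Evolve a bitstring toward all ones (the classic OneMax toy problem)."""
    random.seed(0)  # deterministic for the demo
    def fitness(g):
        return sum(g)
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection: best of 3 random individuals
            return max(random.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = random.randrange(1, n_bits)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:           # occasional point mutation
                j = random.randrange(n_bits)
                child[j] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

A random 32-bit string averages 16 ones; selection pressure pushes the population close to the optimum within a few dozen generations.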