"Reflections on Trusting Trust" by Ken Thompson is one of my favorites.
Most papers by Jon Bentley (e.g. A Sample of Brilliance) are also great reads.
I'm a frequent contributor to Fermat's Library, which posts an annotated paper (CS, Math and Physics mainly) every week. If you are looking for interesting papers to read, I would strongly recommend checking it out - http://fermatslibrary.com/
I would never call it my "all-time favorite" (no paper qualifies for that title in my book), but Satoshi Nakamoto's paper, "Bitcoin: A Peer-to-Peer Electronic Cash System" deserves a mention here, because it proposed the first-known solution to the double-spending problem in a masterless peer-to-peer network, with Byzantine fault tolerance (i.e., in a manner resistant to fraudulent nodes attempting to game the rules), via a clever application of proof-of-work:
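The proof-of-work mechanism at the heart of the paper can be sketched in a few lines. This is a toy, hashcash-style illustration, not Bitcoin's actual consensus code; the 8-byte nonce encoding and `difficulty_bits` knob are arbitrary choices for the sketch:

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Search for a nonce whose SHA-256 hash has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)  # hashes below this value qualify
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Checking a proof is one hash, while producing it took many: that asymmetry is the point."""
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Raising `difficulty_bits` by one doubles the expected mining work, while verification stays a single hash.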
Others in this thread have already mentioned papers or opinionated essays that quickly came to mind, including "Reflections on Trusting Trust" by Ken Thompson, "A Mathematical Theory of Communication" by Claude Shannon (incredibly well-written and easy-to-follow given the subject matter), and "Recursive Functions of Symbolic Expressions and Their Computation by Machine" by John McCarthy.
I would also mention "On Computable Numbers, with an Application to the Entscheidungsproblem" by Alan Turing, "On Formally Undecidable Propositions of Principia Mathematica And Related Systems" by Kurt Gödel, and "The Complexity of Theorem Proving Procedures" by Stephen Cook, but in my view these papers are 'unnecessarily' challenging or time-consuming to read, to the point that I think it's better to read textbooks (or popular works like "Gödel, Escher, and Bach" by Douglas Hofstadter) covering the same topics instead of the original papers. Still, these papers are foundational.
Finally, I think "The Mythical Man-Month" by Fred Brooks, and "Worse is Better" by Richard Gabriel merit inclusion here, given their influence.
This is by no means an exhaustive list. Many -- many -- other worthy papers will surely come to mind over the course of the day that I won't have a chance to mention here.
There are many other good recommendations elsewhere in this thread, including papers/essays I have not yet read :-)
Fyi, the original paper by Ralph Merkle where he introduces what is later called Merkle Trees is definitely worth knowing. I wouldn't say it is my fav, but it resulted in a key component in some of my favorite technologies.
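For readers who haven't seen the construction: a Merkle root commits to a whole list of items with a single hash. A minimal sketch; the odd-level leaf duplication and the lack of leaf/node domain separation are simplifications:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash each leaf, then repeatedly hash adjacent pairs until one root remains."""
    if not leaves:
        raise ValueError("need at least one leaf")
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Changing any leaf changes the root, and membership can be proved with only O(log n) sibling hashes, which is why the structure shows up in so many later systems.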
Never did it for me. Always seemed totally trivial. What else could it possibly be!? It seems like a direct simplification of Einstein (... no global time ... spatially separated locations communicate with signals ... signals propagate in finite time ... proper time is an ordering along a world line ... relative time depends on communication ... there are spacelike separations where events do not have a defined temporal order ...). I guess they had first access to the fruit tree in those days.
I once spoke to Henri Gouraud after he gave a talk. He was very self-deprecating and acutely embarrassed that his name was attached to a blindingly obvious first-thing-that-comes-into-your-head shading expression-that-barely-deserves-the-name-algorithm. Sometimes that low hanging stuff gives you stomach ache.
Exactly. I worked my way backwards to this paper while exploring real-world distributed systems like Kafka and ZooKeeper, and it was an exceptionally well-written paper that explained the basics of building distributed systems.
The first half of the paper is a spot-on critique of so many things that go wrong in the process of designing and implementing large-scale software systems. The second half, where the authors propose a solution, kind of goes off the rails a bit into impracticality... but they definitely point in a promising direction, even if nobody ever uses their concrete suggestions.
Definitely a gem. I loved their accurate comparison of the common programming paradigms, though I have to admit that I didn't follow their idea of functional relational programming in detail.
Peter Naur, "Programming as theory building." (1985)
“…programming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program and certain other texts.”
I've been trying to get it frontpaged because, despite its length, it's perhaps one of the most startling papers of this decade. Sadly, it seems like the HN voting gestalt hasn't decided to upvote a paper that's the CS equivalent of breaking the speed of light:
"Generic Top-down Discrimination for Sorting and Partitioning in Linear Time" ->
It is possible, with some proper insight and approaches, to sort general data structures in linear time on modern computing hardware. The speed limit of sorting is O(n) with some extra constant cost (often accrued by allocation). It works by decomposing and generalizing something akin to radix sort, leveraging a composable pass of linear discriminators to do the work.
There's a followup paper using this to make a very efficient in-memory database, which one could easily generalize under something like Kademlia, and with care I suspect it could make something like a better Spark core.
I keep submitting and talking about this but no one seems to pick up on it. This paper is crazy important and every runtime environment SHOULD be scrambling to get this entire approach well-integrated into their stdlib.
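For intuition, here is a drastically simplified, byte-strings-only sketch of the idea, not Henglein's generic construction (which works compositionally over arbitrary orderable types): a most-significant-digit bucket sort that touches each key byte at most once, so the running time is linear in the total number of key bytes, with the constant cost dominated by bucket allocation:

```python
def disc_sort(keys: list[bytes]) -> list[bytes]:
    """Sort byte strings lexicographically in time linear in the total bytes examined."""
    def go(items: list[bytes], depth: int) -> list[bytes]:
        if len(items) <= 1:
            return items
        # Keys exhausted at this depth equal the shared prefix and sort first.
        done = [k for k in items if len(k) == depth]
        buckets: list[list[bytes]] = [[] for _ in range(256)]  # the allocation cost
        for k in items:
            if len(k) > depth:
                buckets[k[depth]].append(k)
        for b in buckets:
            if b:
                done.extend(go(b, depth + 1))
        return done
    return go(keys, 0)
```

The paper's contribution is making this kind of pass generic: discriminators for products, sums, and recursive types compose, so the same linear-time machinery applies to arbitrary data structures rather than just strings.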
I'm happy to see this excellent paper mentioned. Fritz Henglein (the author) was my thesis supervisor last year, and I worked on developing some of his ideas further.
In particular, I generalised discrimination (and added a touch of linear algebra) to devise a simple multi-way join algorithm that computes the join of any number of relations in optimal time (in a specific technical sense). Such results have been obtained recently, but only with far more complicated algorithms.
Alas, the fully general proof of optimality eludes us, so nothing has been published yet.
I have mixed feelings about this one. It certainly has inspired some interesting discussion. The paper itself is obtuse and difficult to absorb.
The methodology isn't _really_ new, but it isn't used frequently. While they showed it as competitive to commonly used sorting algorithms, situationally it can really shine and show significant performance benefits. I'm surprised they didn't show this in one of the graphs (or I missed it in my brief speed-thru).
I don't really see a future for this in standard libraries. I can totally see this methodology being used in the wild in a variety of systems where time = money or entities are looking for a small proprietary distinguishing edge.
If I understand the talk correctly, this is just a way to assign objects a lazy list of shorts, and then sort them by that list using a recursive bucket sort.
I like his system for generating these lists when there's nested data. However, it's often as simple as concatenating the lists generated by the fields of the object.
The O(n) time bound isn't surprising to me, since each byte of information in the input can lead to at most one scan through the bucket array. Actual performance is obviously a lot better.
I'd still like to see this integrated into a more mainstream language. It would be an interface where all you have to implement is a generator that concatenates the generators of the fields of your class in the desired order.
I know how to do any combination of fixed-width numbers, structs, unions, and strings, but I'm not sure how to handle tree-like structures, and as for something with cycles, I can't even think of a regular comparator for that.
I assume this isn't this simple, so what am I missing?
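Taking the interface idea above at face value, here is a sketch (all names are mine, hypothetical, and not from the paper): each record exposes a generator of small integer "digits" built by concatenating the generators of its fields, and a recursive bucket sort consumes the streams:

```python
from typing import Any, Callable, Iterator

def person_digits(name: str, age: int) -> Iterator[int]:
    """Hypothetical per-class method: concatenate the digit streams of the fields, in order."""
    yield from name.encode("utf-8")
    yield 0                             # terminator so "ann" sorts before "anna"
    yield from age.to_bytes(2, "big")   # fixed-width field needs no terminator

def stream_sort(records: list, digits_of: Callable[[Any], Iterator[int]]) -> list:
    """Recursive bucket sort on digit streams; a toy sketch, not the paper's construction."""
    keyed = [(tuple(digits_of(r)), r) for r in records]  # materialised here for brevity
    def go(items, depth):
        if len(items) <= 1:
            return [r for _, r in items]
        out = [r for k, r in items if len(k) == depth]   # exhausted streams sort first
        buckets: dict[int, list] = {}
        for k, r in items:
            if len(k) > depth:
                buckets.setdefault(k[depth], []).append((k, r))
        for d in sorted(buckets):   # a real implementation indexes an array by digit
            out.extend(go(buckets[d], depth + 1))
        return out
    return go(keyed, 0)
```

Trees can be handled by flattening in a fixed traversal order with markers between children; cyclic structures are the genuinely hard case the comment identifies, since a naive stream never terminates.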
>It is possible, with some proper insight and approaches, to sort general datastructures in linear time on modern computing hardware. The speed limit of sort is O(n) with some extra constant cost (often accrued by allocation).
There is a well-known proof from information theory that the problem of sorting n distinct items has a worst-case time complexity of Ω(n log n) fixed-bitwidth operations: (1) Since each of the n items is distinct, the average number of bits needed to encode each item is Ω(log n). (2) In the worst case, in order to correctly sort the items, each bit of each item needs to be read. (3) Therefore, sorting the list requires reading Ω(n log n) bits.
So, I'm not sure how to understand the claim that their algorithm operates in "linear time". Are they saying O(n) operations where each operation operates on O(log n) bits?
[Edit: See below response from kraghen: The time is linear in the total size of the data, not in the number of items. I'll leave this comment up in case anyone has the same misunderstanding that I had.]
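The edit note can be made precise; writing $N$ for the total input size in bits, the two bounds describe the same running time:

```latex
% n distinct keys need \Omega(\log n) bits each on average, so the
% total input size satisfies
N = \Omega(n \log n).
% An algorithm running in time
T = O(N)
% is therefore "linear in the data" while still doing \Omega(n \log n)
% work when measured in the number of items n: linearity in N and the
% classical sorting lower bound are consistent, not in conflict.
```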
This is a nice technique in theory, but in practice it isn't so fast, mainly because of the implementation details. Radix sort is actually faster than comparison sorts if implemented well.
I know Thompson's "Reflections on Trust" and Shannon's "Communication" papers are more famous but I believe BCS's "Correctness" paper has more immediate relevance to a wider population of programmers.
For example, I don't believe Ethereum's creator, Vitalik Buterin, is familiar with it because if he was, he would have realized that "code is law" is not possible and therefore he would have predicted the DAO hack and subsequent fork/reversal to undo the code.
Seriously, if you read BCS's paper and generalize its lessons learned, you will see the DAO hack and its reversal as inevitable.
+1 to this. Von Neumann was well ahead of his time with his automata ideas. I found it interesting how both Von Neumann and Turing turned to analysing biological systems later in their careers.
Diego Ongaro's Raft paper[1]. Perhaps this only speaks to my experience as a student but having surveyed some of the other papers in the domain (paxos[2] in its many variants: generalized paxos[3], fpaxos[4], epaxos[5], qleases[6]), I'm glad the author expended the effort he did in making Raft as understandable (relatively) as it is.
A bit cliche for HN, but I really enjoyed "Recursive Functions of Symbolic Expressions and Their Computation by Machine (Part I)" by John McCarthy[0]. It was accessible to someone whose background at the time was not CS and convinced me of the beauty of CS -- and Lisp.
It might be a cliche one to pick, but I really really really enjoy Alan Turing's "Computing Machinery and Intelligence"[1]. This paper straddles the line between CS and philosophy, but I think it's an important read for anyone in either field. And a bonus is that it's very well-written and readable.
Thanks for picking that one, that means I don't have to choose between it and my joint favourite, the snappily titled "Why Heideggerian AI Failed and how Fixing it would Require making it more Heideggerian".
Where Turing's paper put us on the journey towards AI, 57 years later Dreyfus points out how the direction chosen is not only wrong, but hopelessly misguided.
[quote]
As I studied the RAND papers and memos, I found to my surprise that, far from replacing philosophy, the pioneers in CS and AI had learned a lot, directly and indirectly from the philosophers. They had taken over Hobbes’ claim that reasoning was calculating, Descartes' mental representations, Leibniz's idea of a "universal characteristic" - a set of primitives in which all knowledge could be expressed, -- Kant’s claim that concepts were rules, Frege's formalization of such rules, and Wittgenstein's postulation of logical atoms in his Tractatus. In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.
[endquote]
Yao's minimax principle. It's not a very exciting read or a very exciting conclusion compared to some of these other papers, but it's still interesting, and the conclusion has been practically useful to me a small handful of times.
It concerns randomized algorithms, which are algorithms that try to overcome worst case performance by randomizing their behavior, so that a malicious user can't know which input will be the worst case input this time.
The principle states that the worst-case expected cost of the best randomized algorithm equals the expected cost of the best deterministic algorithm on inputs drawn from the worst-case distribution.
Yao proves this is the case by constructing two zero sum games based around the algorithms' running times and then using game theory (specifically von Neumann's minimax theorem) to show that the two approaches are equivalent. It's a really neat approach!
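In symbols, a standard statement of the principle, with $T$ the cost, $R$ ranging over randomized algorithms (distributions over deterministic ones), $A$ over deterministic algorithms, and $\mu$ over input distributions:

```latex
\min_{R}\,\max_{x}\; \mathbb{E}\bigl[T(R, x)\bigr]
  \;=\;
\max_{\mu}\,\min_{A}\; \mathbb{E}_{x \sim \mu}\bigl[T(A, x)\bigr]
% The practically useful direction: for ANY input distribution \mu and
% ANY randomized algorithm R,
%   \min_{A} \mathbb{E}_{x \sim \mu}[T(A, x)] \le \max_{x} \mathbb{E}[T(R, x)],
% so exhibiting one hard input distribution lower-bounds every
% randomized algorithm.
```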
The Anatomy of a Large-Scale Hypertextual Web Search Engine, by Brin and Page.
Not only for the historical value of changing the world, and for the fact that it's very interesting and readable; It has personal value to me: the first CS paper I've ever read and it inspired me and changed the course of my life, literally.
Also, it has some very amusingly naive (in hindsight) stuff in it, like: "Google does not have any optimizations such as query caching, subindices on common terms, and other common optimizations. We intend to speed up Google considerably through distribution and hardware, software, and algorithmic improvements. Our target is to be able to handle several hundred queries per second"
I was hoping this one would show up. It's a startlingly simple paper, which both made the idea easy to implement (PageRank algorithm can be implemented in around 10 lines of Python, give or take), and easy to game (viz. the rise of link farms). Recommended for anyone with the prerequisite 1 semester of linear algebra or equivalent.
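In that spirit, a power-iteration sketch of PageRank, simplified and not Google's production algorithm, with the damping factor 0.85 used in the paper:

```python
def pagerank(links: dict[str, list[str]], d: float = 0.85, iters: int = 50) -> dict[str, float]:
    """Iteratively redistribute rank along outlinks until it (approximately) converges.
    `links[p]` lists the pages p links to; every linked page must appear as a key."""
    n = len(links)
    rank = {p: 1.0 / n for p in links}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in links}   # teleportation share
        for p, outs in links.items():
            if outs:
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
            else:                                  # dangling page: spread rank uniformly
                for q in new:
                    new[q] += d * rank[p] / n
        rank = new
    return rank
```

Each iteration preserves the total rank of 1, and the fixed point is the dominant eigenvector of the link matrix, which is where that semester of linear algebra comes in.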
joaobatalha | 8 years ago
- Reflections on Trusting Trust (Annotated Version) - http://fermatslibrary.com/s/reflections-on-trusting-trust
- A Sample of Brilliance (Annotated Version) - http://fermatslibrary.com/s/a-sample-of-brilliance
cs702 | 8 years ago
https://bitcoin.org/bitcoin.pdf
the_d00d | 8 years ago
https://people.eecs.berkeley.edu/~raluca/cs261-f15/readings/...
nikhizzle | 8 years ago
Time, Clocks, and the Ordering of Events in a Distributed System. Leslie Lamport.
http://amturing.acm.org/p558-lamport.pdf
My first introduction to time scales as a partial ordering. Very mind-opening.
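The paper's clock rules are tiny; a minimal sketch (class and method names are mine, not Lamport's):

```python
class LamportClock:
    """Lamport's logical clock: a counter whose update rules induce an ordering
    consistent with the paper's 'happened-before' relation."""
    def __init__(self) -> None:
        self.time = 0

    def tick(self) -> int:
        """Rule 1: increment before each local event."""
        self.time += 1
        return self.time

    def send(self) -> int:
        """Sending is an event; the returned timestamp travels with the message."""
        return self.tick()

    def receive(self, msg_time: int) -> int:
        """Rule 2: on receipt, jump past the sender's timestamp."""
        self.time = max(self.time, msg_time) + 1
        return self.time
```

These two rules guarantee that if event a happened before event b, then clock(a) < clock(b); the converse does not hold, which is exactly the partial-order insight.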
e12e | 8 years ago
"Designing croquet's TeaTime: a real-time, temporal environment for active object cooperation", David Reed :
http://dl.acm.org/citation.cfm?id=1094855.1094861
0xf8 | 8 years ago
http://math.harvard.edu/~ctm/home/text/others/shannon/entrop...
ape4 | 8 years ago
"If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey"
thristian | 8 years ago
https://github.com/papers-we-love/papers-we-love/blob/master...
agentm | 8 years ago
https://github.com/agentm/project-m36
akkartik | 8 years ago
http://pages.cs.wisc.edu/~remzi/Naur.pdf
https://news.ycombinator.com/item?id=10833278
https://news.ycombinator.com/item?id=7491661
KirinDave | 8 years ago
http://www.diku.dk/hjemmesider/ansatte/henglein/papers/hengl...
(if you're daunted by an 80 page paper as I am, there is also a talk on it: https://www.youtube.com/watch?v=sz9ZlZIRDAg)
Followup paper: http://www.diku.dk/hjemmesider/ansatte/henglein/papers/hengl...
Unsurprisingly, Kmett has already implemented it in Haskell (it generalized neatly under the dual of the applicative+alternative functor): https://hackage.haskell.org/package/discrimination
kraghen | 8 years ago
learningram | 8 years ago
>Sadly, it seems like the HN crowd won't upvote a paper that's the CS equivalent of breaking the speed of light:
Comments like this will turn people off the rest of your post.
flavio81 | 8 years ago
As collected by the SIGBOVIK group:
http://sigbovik.org/2017/proceedings.pdf
Quote:
"Following the popularity of MapReduce, a whole ecosystem of Apache Incubator Projects has emerged that all solve the same problem. Famous examples include Apache Hadoop, Apache Spark, Apache Pikachu, Apache Pig, German Spark and Apache Hive [1]. However, these have proven to be unusable because they require the user to write code in Java. Another solution to distributed programming has been proposed by Microsoft with their innovative Excel system. In large companies, distributed execution can be achieved using Microsoft Excel by having hundreds of people all sitting on their own machine working with Excel spreadsheets. These hundreds of people combined can easily do the work of a single database server."
PS: This thread is great; I'm bookmarking it because there are good (serious) papers here.
andars | 8 years ago
C. Shannon, "A Symbolic Analysis of Relay and Switching Circuits" (1940): https://dspace.mit.edu/bitstream/handle/1721.1/11173/3454142...
Shannon's master's thesis, which introduces Boolean algebra to the field of digital circuit design.
R.W. Hamming, "Error Detecting and Error Correcting Codes" (1950): https://ia801903.us.archive.org/1/items/bstj29-2-147/bstj29-...
In Hamming's own words: "Damn it, if the machine can detect an error, why can't it locate the position of the error and correct it?"
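Hamming's answer to his own question fits in a few lines. A sketch of the classic Hamming(7,4) code, where the parity bits sit at positions 1, 2, and 4 so that the syndrome spells out the position of a single flipped bit:

```python
def hamming74_encode(d: list[int]) -> list[int]:
    """Encode 4 data bits into a 7-bit codeword (positions 1..7; parity at 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c: list[int]) -> list[int]:
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # the syndrome *is* the 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Each parity check covers the positions whose binary index has a particular bit set, so a single error fails exactly the checks that, read as a binary number, name its position.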
J.T. Kajiya, "The Rendering Equation" (1986): http://cseweb.ucsd.edu/~ravir/274/15/papers/p143-kajiya.pdf
Kajiya introduces the integral rendering equation, which is the basis for most current techniques of physically based rendering.
gregors | 8 years ago
https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...
agentultra | 8 years ago
[0] http://cba.mit.edu/events/03.11.ASE/docs/VonNeumann.pdf
tksfz | 8 years ago
Can programming be liberated from the von Neumann style? - John Backus's Turing lecture - http://dl.acm.org/citation.cfm?id=1283933
irfansharif | 8 years ago
[1]: https://raft.github.io/raft.pdf
[2]: https://www.microsoft.com/en-us/research/wp-content/uploads/...
[3]: https://www.microsoft.com/en-us/research/wp-content/uploads/...
[4]: https://www.microsoft.com/en-us/research/wp-content/uploads/...
[5]: https://www.cs.cmu.edu/~dga/papers/epaxos-sosp2013.pdf
[6]: https://www.cs.cmu.edu/~dga/papers/leases-socc2014.pdf
emidln | 8 years ago
[0] - http://www-formal.stanford.edu/jmc/recursive.html
chadash | 8 years ago
[1] https://www.csee.umbc.edu/courses/471/papers/turing.pdf
AndrewOMartin | 8 years ago
http://leidlmair.at/doc/WhyHeideggerianAIFailed.pdf
twoodfin | 8 years ago
https://www.cs.ucsb.edu/~urs/oocsb/self/papers/papers.html
dvirsky | 8 years ago
http://infolab.stanford.edu/~backrub/google.html
brad0 | 8 years ago
https://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia...