[+] [-] sprout|15 years ago|reply
The coolest thing about all this is watching the Internet do what it was built for: real-time collaboration on a massive scale. The whole of the relevant scientific communities in one virtual space, clamoring at a hundred virtual whiteboards. I love living in the future.
[+] [-] hristov|15 years ago|reply
And that would be commenting on a paper before anyone has finished reading it?
Just kidding.
[+] [-] joe_the_user|15 years ago|reply
I haven't looked at the proof in any kind of detail.
But comments by James S. Gates and others raise a question in my mind. Any statistical argument that a given phenomenon is unlikely rests on the space of phenomena behaving very roughly "commutatively": AB and BA equally likely, or at least correlated.
How do you apply that kind of reasoning to a totally arbitrary process of computation, a process which wouldn't necessarily respect any kind of randomness? It seems to me that any statistical argument for P≠NP has to include a detailed and direct discussion of how it escapes this problem.
Anyway, that's my crude effort to translate the discussion into my mere math-MA understanding...
[+] [-] Jun8|15 years ago|reply
Compared to this, the rigid approach adopted on MathOverflow, closing any discussion of the issue, seems so 19th century. See the discussion here: http://meta.mathoverflow.net/discussion/590/
[+] [-] amichail|15 years ago|reply
Could it be that the million-dollar Millennium Prizes are actually slowing down the resolution of important problems because they discourage crowd sourcing proofs?
[+] [-] j-g-faustus|15 years ago|reply
I may be out of tune with many others here, but I think certain constructive processes (creating a many-faceted proof, outlining an overarching software architecture, shaping the storyline of a good novel or movie) are most efficient with small groups of people. Otherwise you run into "design by committee" or "too many cooks".
It is easier to crowdsource once the main outline is in place: the outline structures a vague problem into a set of more specific subtasks that can (more or less) easily be distributed across many people.
Like in this case: deconstructing an existing proof can work very well with crowdsourcing, since a proof contains a relatively small set of specific claims, and each "crowd" contributor can focus on one particular issue.
But this works because the crowd now has a common focus point and task list, created by the paper being published. It's not clear to me how (or whether) the process could be reversed, i.e. whether the same crowd and effort could be coordinated to co-write an original paper with an alternative proof.
[+] [-] fizx|15 years ago|reply
It seems possible to me, but doubtful. Even contributing work worth a footnote in the Wikipedia article about solving P =? NP would validate a lifetime of research for many people.
[+] [-] Confusion|15 years ago|reply
I don't think anyone working on these kinds of problems for years is aiming for the M$. That's just icing on the cake. The main reason for not publishing partial results (which could be considered a sort of crowdsourcing, since it will lead to dozens of people reading and inspecting them) is that someone could recognize the trail and beat you to its endpoint.
[+] [-] Groxx|15 years ago|reply
>... because they discourage crowd sourcing proofs?
Do they? It seems a prize would be valuable in any pursuit, as it costs the reviewers less. It certainly discourages crowdsourcing solutions (how would you split the winnings?), but proofs? A proof seems to require an attempted solution first, to be checked, in which case credit is pretty easily assigned.
[+] [-] pgbovine|15 years ago|reply
Well, you can't crowdsource everything... Crowdsourcing tasks that most humans can do (e.g., labeling images) works pretty well, but solving open problems in math or composing masterful works of literature? More doubtful. The crowds are only wise in certain respects.
[+] [-] unknown|15 years ago|reply
[deleted]
[+] [-] cdavidcash|15 years ago|reply
It seems like a bit much to hope to understand and verify the proof without a huge investment of time and effort. The problem is compounded exponentially if you don't already do research in theoretical computer science and have the appropriate logic background.
But! All of this has put a spotlight on a theorem that I had somehow never appreciated. It is a beautiful theorem, well within reach of a mathematically literate computer scientist: P = FO(LFP) over finite ordered structures. If you start reading up on this, it's immediately clear why it is interesting: FO(LFP) says nothing a priori about computational resources in the sense that P does.
Anyway, a sufficiently curious reader can look it up as Theorem 4.10 in the following text:
http://www.amazon.com/Descriptive-Complexity-Texts-Computer-...
That book is pretty light on the basics of logic. I used this book:
http://www.amazon.com/Mathematical-Logic-Undergraduate-Texts...
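To make that concrete, here is a minimal sketch (TypeScript) of evaluating a least fixed point on a finite structure, with transitive closure as the classic example. The helper names (lfpStage, transitiveClosure) and the pair encoding are illustrative, not from the book above.

    // The first-order formula
    //   phi(R, x, y) := E(x, y) OR exists z (E(x, z) AND R(z, y))
    // has a free relation variable R; iterating phi from the empty
    // relation converges to its least fixed point, the transitive
    // closure of the edge relation E.

    type Rel = Set<string>;                          // a pair (x, y) encoded as "x,y"
    const pair = (x: number, y: number) => `${x},${y}`;

    // One application of phi to the current stage R.
    function lfpStage(universe: number[], E: Rel, R: Rel): Rel {
      const next: Rel = new Set();
      for (const x of universe) {
        for (const y of universe) {
          if (E.has(pair(x, y)) ||
              universe.some(z => E.has(pair(x, z)) && R.has(pair(z, y)))) {
            next.add(pair(x, y));
          }
        }
      }
      return next;
    }

    // Iterate phi from the empty relation until nothing changes. phi is
    // monotone, so the stages only grow and there are at most |universe|^2
    // of them: the whole run takes polynomial time. That is the easy
    // direction, FO(LFP) contained in P.
    function transitiveClosure(universe: number[], E: Rel): Rel {
      let R: Rel = new Set();
      for (;;) {
        const next = lfpStage(universe, E, R);
        if (next.size === R.size) return R;          // equal size => fixed point
        R = next;
      }
    }

    const universe = [0, 1, 2, 3];
    const E: Rel = new Set([pair(0, 1), pair(1, 2), pair(2, 3)]);
    console.log([...transitiveClosure(universe, E)].sort());
    // [ '0,1', '0,2', '0,3', '1,2', '1,3', '2,3' ]

The hard direction, that every polynomial-time property of finite ordered structures is expressible in FO(LFP), is what the book builds up to.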
[+] [-] Sephr|15 years ago|reply
Learn to use the DOM instead of HTML strings, and you'll almost never run into those XSS problems. You can replace all of your HTML sanitization functions with document.createTextNode(text).
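A minimal sketch of the contrast, assuming a hypothetical "comments" container element and a hypothetical sample payload; the DOM calls themselves are standard:

    const untrusted = '<img src=x onerror="alert(1)">';

    const container = document.getElementById('comments');
    if (container) {
      // Dangerous: the string is parsed as HTML, so the payload executes:
      // container.innerHTML = '<p>' + untrusted + '</p>';

      // Safe: a text node renders the string literally, never as markup.
      const p = document.createElement('p');
      p.appendChild(document.createTextNode(untrusted));
      container.appendChild(p);
    }

The point is that a text node is never reparsed as markup, so no sanitization pass is needed for plain-text insertion.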
[+] [-] Figs|15 years ago|reply
Head spinning. At least I got that bit about LFP... mostly.