Trouble is, there are completeness results. You can prove everything that is true in all interpretations of first-order logic. The statements that cannot be proven are unprovable because they are false in some non-standard interpretation of arithmetic. It is the existence of non-standard interpretations that is the big surprise, and it is the fact that these statements are, in a sense, false that blocks proving them.
The equation whose roots are the Gödel numbers of the proofs that it has no solutions has non-standard solutions. That is why there is no proof that it has no solutions. But the non-standard solution is the Gödel number of, err, well, what exactly? The trouble with the extra, non-standard numbers is that you cannot reach them by counting, so the proof they number is only finite in the non-standard interpretation. You cannot write it out. It doesn't count, because we require that proofs be finite, and by finite we mean finite in the ambient meta-theory, which takes the standard model of the natural numbers.
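For anyone who hasn't seen how a proof (a finite sequence of symbols) becomes a single number, here is a toy prime-power encoding. This is a sketch of the general idea, not Gödel's exact scheme; symbol codes must be at least 1 for the decoding to work:

```python
def first_primes(n):
    """Return the first n primes by trial division."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def godel_number(codes):
    """Encode symbol codes c1, c2, c3, ... as 2^c1 * 3^c2 * 5^c3 * ..."""
    g = 1
    for p, c in zip(first_primes(len(codes)), codes):
        g *= p ** c
    return g

def decode(g):
    """Recover the sequence by reading off the prime exponents in order."""
    codes, k = [], 2
    while g > 1:
        e = 0
        while g % k == 0:
            g //= k
            e += 1
        if e:
            codes.append(e)
        k += 1
    return codes

print(godel_number([1, 2, 3]))  # 2^1 * 3^2 * 5^3 = 2250
```

Because factorization into primes is unique, the original sequence is always recoverable, which is what lets statements about proofs be rephrased as statements about numbers.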
It would be much better to summarise Gödel's theorem as showing that first-order logic fails to tie down the nature of infinity.
Do we care about non-standard arithmetic? If memory serves, there are proofs that non-standard models of arithmetic are not computable (Tennenbaum's theorem). I can see why Bernie Madoff might find that convenient, but the rest of us can regard the uselessness of non-standard arithmetic as mathematically proven.
Gödel's proof shows that first-order logic is limited by the existence of mathematical monstrosities that we cannot use (because their arithmetic is not computable) and never encounter (because you cannot reach them by counting).
Isn't this broader than the limitations of first-order logic? After Turing, we know that no matter what formal system of axioms and rules of inference one chooses (as long as these can be applied mechanically), there will be some true statements the system won't be able to prove. (Does the nth Turing machine halt? Does a particular Diophantine equation have integer solutions?)
I've read the book the article ostensibly reviews, and to be honest it's not much cop. It's full of, frankly, a load of rubbish about post-modern thought and the extent to which Gödel's work might vindicate it. There are certainly interesting aspects to it, but it's neither a good biography nor a particularly good introduction to incompleteness, and is liable to merely frustrate anyone possessed of a passing acquaintance with either subject.
I rather enjoyed the book, but it is largely because I knew what I was getting into. It's a pop science book, written by a person with a philosophical background, and embellished in the Hollywood-esque fashion, pitting the good guy--Goedel, the misunderstood genius--against the bad guy--Wittgenstein and the logical positivists--in a no-holds-barred showdown of ideas that lie at the heart of all truth and mathematics!
Obviously, it's a stretch at times, and there's a good deal of rigor lost, but it's a pretty juicy story. If I wanted a biography, I would read one specifically about Goedel, and not one about Incompleteness. If I wanted to learn about the theorems themselves, I would pick up a copy of his paper or one of the many readable expositions of his work. But I wanted a fun read that explained why so many philosophers and nonmathematicians were interested in this rather esoteric bit of mathematics, and I certainly got that from the book.
Nonetheless, if you want a better book by the author, I recommend The Mind-Body Problem over this one.
I read the book, and it was sometimes boring, but there are a lot of positive things that make you continue with it. I got to learn about the works of Gödel, Escher and Bach, and then about the patterns which exist in nature, or in any work, where we never thought there could be a pattern.
Anyway, I'm not a mathematician or a physicist, so I can't really comment on Gödel's work. I think the book is worth reading; whether it contributed to computer science or not, I do not know.
Didn't he put forward two theorems? One about undecidable statements, as described in the article, but the other was much "worse": the consistency of mathematics cannot be proven. That is, for any axiom system of a certain minimal strength (strong enough to derive the natural numbers, say), it cannot be proven that the axioms don't contradict each other.
Hopefully I am citing it right, as I am too lazy to look it up. But I think that is more of a ticking bomb sitting in the guts of mathematics. It could happen that one of these days the whole building of mathematics collapses because a contradiction in the axioms emerges. Apparently the common way to deal with it is to just deem it very unlikely since things went fine for centuries, and there is not much that can be done about it anyway.
Must finally read Gödel's proofs again, have been meaning to do so for ages :-/
The second theorem says that the consistency of a system cannot be proved within that system (for sufficiently strong systems, etc.).
If a contradiction is discovered in set theory, either it will be fixed by a change to the axioms, or another set of foundational axioms altogether will become the standard foundation of mathematics. Perhaps the canonical example of this is naive set theory itself - paradoxes were discovered, but set theory was interesting enough that mathematicians looked for alternate definitions that did not contain the paradoxes, and they succeeded.
There are two things to note here. First, axiomatization was a useful tool in distinguishing various types of set theory and defining the ones that did not contain the (known) paradoxes. Second, the full formal treatment of these set theories came relatively late to mathematics, as did widespread use of formal, axiomatic methods in general. Rigorous axiomatic formalisms are very useful mathematical tools, but mathematics got along without them for a long time (not even Euclid counts, by modern standards) and can do so again if necessary. It is unlikely that this will happen, not because axiomatics can be proved to always work or because mathematics cannot get along without it, but because it is too useful a tool to abandon easily.
This was really Godel's point: mathematics is not identical with formalism. They stand and fall separately. This is not to say that mathematics could never collapse for any reason, only that it would take a lot more than finding a paradox at the center of ZFC to make it happen.
To fill in some of the details: the so-called Second Incompleteness Theorem states that consistent, recursively axiomatizable systems containing arithmetic are insufficient for proving their own consistency. It is possible to prove the consistency of arithmetic, but to do so you must use an even more complex system of axioms whose consistency can itself only be proved with an even more complex system, and so on.
So strictly speaking, the theorem doesn't rule out consistency proofs, but it does rule out proving the consistency of complex systems by simpler systems, meaning that the consistency of the consistency proofs can't be trusted any more than the consistency of the original system!
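The resulting regress can be written out concretely, using PA for first-order Peano arithmetic and Con(T) for the arithmetized statement "T is consistent":

```latex
\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}), \qquad
\mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA}), \qquad
\mathrm{ZFC} \nvdash \mathrm{Con}(\mathrm{ZFC}), \qquad \ldots
```

Gentzen's consistency proof for PA is the classic example: it works, but only by assuming induction up to the ordinal ε₀, a principle that PA itself cannot justify.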
I still don't think we have grasped the importance of Gödel's incompleteness theorem. For example, indirectly it means that we can't ever solve the halting problem. Personally, I also think it applies to physics, especially the idea of finding a theory of everything. I think a theory of everything won't be possible, simply because the incompleteness theorem states that "given any system of axioms that produces no paradoxes, there exist statements about X which are true, but which cannot be proved using the given axioms" (where X can be numbers, computer programs, or ...).
This is exactly the kind of thinking that the article argues against. Remember that Goedel is speaking within the context of formal systems, so there is relatively little impact to any area outside of the small field of philosophy of mathematics. Anybody who tries to extrapolate beyond those bounds is either mistaken or misled as to what Goedel's incompleteness theorems actually imply. There is a much more thorough and eloquent explanation in the following link. [http://www.ams.org/notices/200604/fea-franzen.pdf]
Hell, even Einstein, who was close friends with Goedel and familiar with the result, was searching for a more fundamental physical theory than quantum mechanics, so clearly he did not believe it was a lost cause.
My instincts agree with you, but the rational part of my mind can't distinguish statements that are affected by Gödel's theorem from those that are not.
Does it really mean that a theory can't be found or just like with "everything else", we can only be 99.99999999% sure and never reach 100%?
Does it necessarily mean that the incompleteness theorem extends to all systems?
The article linked somewhere else in this thread:
http://www.ams.org/notices/200604/fea-franzen.pdf
seems to say that it doesn't extend to other "systems".
I think physics would be very happy to come up with a theory of "almost everything", like set theory is for mathematics. That would be a huge step forward, and would be completely unaffected by Godel's theorem.
Also, the halting problem's insolvability is straightforward to see without knowing any of Godel's work. :)
It is easy to exhibit an uncomputable function; the busy beaver function is the standard, easy example. It is immediately obvious that a solution to the halting problem would make the busy beaver function computable: just run only the programs that halt and pick the largest output. The unsolvability of the halting problem is way easier than Gödel.
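The diagonal argument itself can be sketched in code. Given any claimed halting oracle, we can build a program on which the oracle must be wrong (`claimed_halts` is a hypothetical function here, since the whole point is that a correct one cannot exist):

```python
def counterexample(claimed_halts):
    """Given claimed_halts(f) -> True iff f() halts, build a program
    that does the opposite of whatever the oracle predicts for it."""
    def spite():
        if claimed_halts(spite):
            while True:       # oracle said "halts", so loop forever
                pass
        # oracle said "loops forever", so halt immediately
    return spite

# Refute a sample oracle that answers "never halts" for everything:
always_no = lambda f: False
spite = counterexample(always_no)
spite()  # returns immediately, so the oracle was wrong about spite
```

Whatever the oracle answers about `spite`, `spite` does the opposite, so no total, correct `claimed_halts` can exist.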
Extending the Standard Model to include gravity, and thereby creating a "TOE" (Theory of Everything), is not, I think, going to be hurt by Gödel. For a TOE to work, all it need do is make predictions over and above the Standard Model, which can be verified by experiment, while remaining consistent with all predictions of the Standard Model. If it can do this, it will stand as a TOE, at least until it disagrees with a prediction, if and when that happens. In other words, a TOE need not be "provably consistent in all its axioms".
All it needs to do is reproduce the Standard Model's predictions and then make a few new ones to cover gravity. One nice prediction, just as an example, would be to have the value of 10^-120 Planck units for the energy density of "dark energy" arise naturally from someplace. If a TOE (inclusive of the Standard Model, plus a description of gravitons that dealt with the infinities) could regurgitate all Standard Model predictions, plus make just one prediction more (like setting the energy density of dark energy without having to appeal to the anthropic principle, mind you), then we've got a good TOE. "Provable?" No. But it lives to fight another day, which is the best we can ask of a theory anyway.
That's interesting, and I do think that something empirical needs to come into play in order to escape issues with formalism without cheating (Sorry about the incredible vagueness of this sentence). However, what makes me wary of proof by prediction is that it has failed in the past and it is all too obvious why.
At the time, proponents of the Ptolemaic system were able to create a very complicated model that predicted the movements of planets reasonably well (I don't know exactly who it was). So judging them by their predictions might have led to the conclusion that the earth was indeed at the center of the universe.
Also, just try to write unit tests to "prove" that some piece of code is correct with regard to concurrency. What if a particular problem occurs so rarely that it is very unlikely to be caught in any empirical test? What if its occurrence depends on factors that are not well known or hard to replicate?
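The concurrency point is easy to demonstrate. The toy counter below has a lost-update race that an ordinary test run may or may not expose; here the read-modify-write window is widened artificially with a sleep so the failure shows up reliably rather than once in a million runs:

```python
import threading
import time

counter = 0

def unsafe_increment():
    """Read-modify-write with no lock; the sleep widens the race window."""
    global counter
    snapshot = counter
    time.sleep(0.1)           # every thread reads 0 before anyone writes
    counter = snapshot + 1    # stale write: threads clobber each other

threads = [threading.Thread(target=unsafe_increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # almost certainly far less than 10
```

Remove the sleep and the same bug is still there, just hiding: a test suite can run green for years while the race waits for an unlucky scheduling. That is exactly the gap between empirical testing and proof.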
The problem with predictive models is that they sometimes break down unexpectedly. That doesn't make them useless, but it makes the distinction between logical proof and empirical proof all the more useful.
http://www-2.dc.uba.ar/profesores/becher/ns.html
Note, the article is for the lay reader, but the gist is a correct representation of Chaitin's views. For more rigor, see:
http://en.wikipedia.org/wiki/Chaitin%27s_constant
Chaitin wrote a freely available book about his views:
http://plus.maths.org/issue37/features/omega/feat.pdf
A bit more discussion here:
http://news.ycombinator.com/item?id=378658