(no title)
dellamonica | 2 years ago
Suppose that mathematicians write papers with pen and paper in an unambiguous subset of natural language (you wish!). What they write as proofs can then be checked thoroughly for correctness by some other human (a referee); otherwise the paper will not be published (again, we wish!).
Now, for a paper with N tokens (or characters, or whatever measure of length), how much computation can the referee reasonably dedicate to checking that proof? Unless you believe something special about the human brain, I'd claim that time polynomial in N is a reasonable assumption. This means that checking is poly(N) for human-checkable proofs of length N.
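To make the "checking is poly(N)" point concrete, here is a minimal sketch (my own toy example, not from the thread): a verifier for Hilbert-style proofs where every line must be a premise or follow by modus ponens from earlier lines. Formulas are plain strings with "A->B" for implication; the check takes roughly O(N^2) formula comparisons for a proof of N lines.

```python
def check_proof(premises, proof):
    """Return True iff every line of `proof` is a premise or follows
    by modus ponens (from A and A->B, conclude B) from earlier lines.
    Roughly O(N^2) comparisons for a proof of N lines."""
    seen = set()
    for line in proof:
        ok = line in premises or any(f"{a}->{line}" in seen for a in seen)
        if not ok:
            return False
        seen.add(line)
    return True

premises = {"p", "p->q", "q->r"}
print(check_proof(premises, ["p", "p->q", "q", "q->r", "r"]))  # True
print(check_proof(premises, ["r"]))                            # False
```

The point is only that verification is cheap relative to discovery: the checker never searches for a proof, it just confirms one, which is exactly the certificate-checking shape of NP.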
This is my argument that nondeterministic Turing machines would absolutely crunch through everything we can do in the current model of mathematical research.
The class of instances is that of theorems for which a human could write a handmade proof and have it verified by another human. Recall that we are discussing formalization and what AI could achieve in this space, so I think formulating what we can currently achieve is a nice contrast with what could come in the future.