top | item 40661346

m4lvin | 1 year ago

The trick is that the human only needs to read and understand the Lean statement of a theorem and agree that it (together with all the definitions involved) really represents the original mathematical statement, but not the proof. Whether the proof actually proves the statement is exactly what Lean checks. We do not need to trust the LLM in any way.
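A minimal Lean 4 sketch of this division of labour (the theorem name is made up; `Nat.add_comm` is in Lean's core library): the human vets only the statement on the first line, and Lean checks that the term after `:=` really has that type.

```lean
-- The human reads and vets only this statement:
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  -- The proof term below could come from an LLM; Lean verifies
  -- that it has the stated type, so it needs no trust.
  Nat.add_comm a b
```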

So would I accept a proof made by GPT or whatever? Yes. But not a (re)definition.

The analogy for programming is: if someone manages to write a function with certain input and output types, and the compiler accepts it, then we know that a function of that type exists. Of course we have no idea about its behaviour, but statements/theorems are types, not values :-)
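A small Lean 4 illustration of the types-vs-values point (the names here are invented): the compiler accepting a definition guarantees only that some term of the declared type exists, not what it does.

```lean
-- The type Nat → Nat says nothing about behaviour:
-- this compiles, but so would any other function of that type.
def f : Nat → Nat := fun _ => 0

-- Under Curry–Howard, a theorem statement *is* a type, and an
-- accepted proof term is a value of that type. Here the "type"
-- 0 + n = n pins down exactly what is being claimed.
theorem zero_add' (n : Nat) : 0 + n = n := Nat.zero_add n
```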


cubefox | 1 year ago

The thing is, when AI systems are able to translate intuitive natural language proofs into formal Lean code, they will also be used to translate intuitive concepts into formal Lean definitions. And then we can't be sure whether those definitions actually define the concepts they are named after.
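A hypothetical Lean 4 example of the worry (both the definition and the theorem are invented): the definition is named `isPrime` but its body actually says "greater than 1 and odd", and Lean cannot object, because it checks proofs against definitions, not definitions against intuitions.

```lean
-- Mistranslated definition: the name suggests primality,
-- but the body says "greater than 1 and odd".
def isPrime (n : Nat) : Prop := n > 1 ∧ n % 2 = 1

-- Lean accepts this without complaint: 9 is "prime" under the
-- wrong definition. Only a human reading the definition itself
-- can catch the error.
theorem nine_prime : isPrime 9 := ⟨by decide, by decide⟩
```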