scarmig|2 years ago
The key bit is constructing a hard, novel proof. The fact that AI doesn't (yet) do this isn't evidence that it doesn't reason, but if it did, that would be strong evidence that it does reason.
(I also take the pessimistic point of view that most humans don't reason, so YMMV.)

lordnacho|2 years ago
Does that mean that when a computer outputs a new proof, it understands?

corethree|2 years ago
You can do something similar to this without giving it a problem that might be impossible. Train the LLM on a bunch of things, but hold out certain things that humans already know about. Then you query the model about one of those things and see if the model comes to the same conclusions humans did. You can actually do this right now with ChatGPT.
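The held-out-topic test described above can be sketched as a small evaluation harness. Everything here is hypothetical scaffolding: `query_model` is a stub standing in for whatever real LLM you would call, and keyword overlap is just one crude way to compare the model's answer against the conclusion humans already reached.

```python
# Sketch of the held-out-topic test: query a model that (ideally) never saw a
# topic in training, then check whether it rediscovers the human conclusion.
# query_model is a hypothetical stub; replace it with a real LLM call.
import re

def query_model(prompt: str) -> str:
    """Stand-in for a model whose training data held out this topic."""
    return "Objects fall at the same rate regardless of mass, ignoring air resistance."

def conclusion_overlap(model_answer: str, human_conclusion: str) -> float:
    """Fraction of the human conclusion's terms that appear in the model's answer."""
    def tokens(s: str) -> set:
        return set(re.findall(r"[a-z]+", s.lower()))
    human_terms = tokens(human_conclusion)
    if not human_terms:
        return 0.0
    return len(tokens(model_answer) & human_terms) / len(human_terms)

# One held-out probe: a conclusion humans reached independently of the model.
probe = "If you drop a heavy ball and a light ball, which lands first and why?"
human_conclusion = "same rate regardless of mass ignoring air resistance"

score = conclusion_overlap(query_model(probe), human_conclusion)
print(f"overlap score: {score:.2f}")  # prints: overlap score: 1.00
```

In practice the hard part is the "avoid certain things" step: with a hosted model like ChatGPT you can't control the training set, so you'd have to pick topics plausibly absent from it, or train your own model with the topic filtered out.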