item 38389839

wbhart | 2 years ago

Sure, but people have already applied deep learning techniques to theorem proving. There are some impressive results (which the press doesn't seem at all interested in because it doesn't have ChatGPT in the title).

It's harder than one might imagine to develop a system that is good at higher-order logic, premise selection, backtracking, algebraic manipulation, arithmetic, conjecturing, pattern recognition, and visual modeling, that has good mathematical knowledge, and that is autonomous and fast enough to be useful.
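To give a flavor of what "higher order logic" and "premise selection" mean in practice, here is a toy sketch in Lean 4 (assuming Mathlib is available for the `ring` tactic; the theorem name is my own). Even a trivial algebraic identity requires the system to pick the right tactic or lemmas from a huge library, which is exactly what premise selection tries to automate:

```lean
import Mathlib.Tactic

-- A human (or a learned premise-selection model) has to know
-- that `ring` closes goals about commutative (semi)ring identities;
-- a generic search over all of Mathlib's lemmas would be hopeless.
theorem add_sq (a b : ℕ) :
    (a + b) * (a + b) = a * a + 2 * (a * b) + b * b := by
  ring
```

The hard part isn't stating such goals, it's getting an autonomous system to choose `ring` (or the relevant lemmas) out of tens of thousands of candidates, quickly, at every step of a long proof.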

For my money, it isn't just a matter of fitting a few existing jigsaw pieces together in some new combination. Some of the pieces don't exist yet.


calf | 2 years ago

Then your critique is about LLMs specifically.

But even there, can we say scientifically that LLMs cannot do math? Do we actually know that? And in my mind, that would imply LLMs cannot achieve AGI either. What do we actually know about the limitations of various approaches?

And couldn't people argue that it's not even necessary to think in terms of capabilities as if they were modules or pieces? Maybe just brute-force the whole thing, make a planetary scale computer. In principle.

visarga | 2 years ago

You seem knowledgeable. Can you share a couple of interesting theorem-proving papers from the last year? I read a few as they came out, and it seemed like neural nets could advance the field by mixing "soft" language models with "hard" symbolic systems.