wbhart | 2 years ago
It's much harder than one might imagine to develop a system that is good at higher order logic, premise selection, backtracking, algebraic manipulation, arithmetic, conjecturing, pattern recognition, and visual modeling, and that also has good mathematical knowledge, is autonomous, and is fast enough to be useful.
For my money, it isn't just a matter of fitting a few existing jigsaw pieces together in some new combination. Some of the pieces don't exist yet.
calf | 2 years ago
But even there, can we say scientifically that LLMs cannot do math? Do we actually know that? In my mind, if they can't, that would imply LLMs cannot achieve AGI either. What do we actually know about the limitations of the various approaches?
And couldn't people argue that it's not even necessary to think in terms of capabilities as if they were modules or pieces? Maybe just brute-force the whole thing with a planetary-scale computer. In principle.
wbhart | 2 years ago
The most interesting papers to me personally are the following three:
* Making higher order superposition work. https://doi.org/10.1007/978-3-030-79876-5_24
* MizAR 60 for Mizar 50. https://doi.org/10.48550/arXiv.2303.06686
* Magnushammer: A Transformer-Based Approach to Premise Selection. https://doi.org/10.48550/arXiv.2303.04488
Your mileage may vary.
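
The core idea behind transformer-based premise selection, as in the Magnushammer paper above, is to embed the goal and each candidate premise and rank premises by similarity. A minimal sketch of that ranking step, with a toy bag-of-words embedding standing in for the trained transformer (the tokenization, scoring, and example statements here are illustrative assumptions, not the paper's actual pipeline):

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two token-count vectors.
    # Counter returns 0 for missing tokens, so the sum covers shared tokens only.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_premises(goal, premises, k=2):
    # Rank candidate premises by similarity of their embedding to the goal's.
    # Toy embedding: whitespace-tokenized bag of words; a real system would
    # use a learned encoder here instead.
    g = Counter(goal.split())
    return sorted(premises, key=lambda p: -cosine(Counter(p.split()), g))[:k]

# Hypothetical lemma statements for illustration.
premises = [
    "add comm : a + b = b + a",
    "mul assoc : (a * b) * c = a * (b * c)",
    "add assoc : (a + b) + c = a + (b + c)",
]
print(select_premises("goal : (x + y) + z = x + (y + z)", premises))
```

Even this crude similarity ranks the associativity lemma for addition above the multiplicative one for an additive goal; the interesting part of the paper is learning an embedding where such rankings hold for real proof libraries.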