item 40506461

thelastquestion | 1 year ago

I don't think this will happen in the near future, but "ever"? Almost certainly, unless humanity stops working on AI systems for some reason. Current approaches to AI have poor reasoning ability. That could be due to a mechanistic flaw, or it could simply be a matter of iteration, i.e., repeatedly trying things and getting feedback about why they are wrong, possibly internally. It's not clear that when we reason, our brains aren't implicitly and explicitly trying all sorts of things and discounting the ones that seem wrong (lots of the neurochemical mechanisms are unclear here).

Naive LLM inference isn't doing this, but an LLM can iterate within a larger system; it may be that a proof of a complex theorem requires an extreme version of that kind of iteration. Humans don't tend to spit out nontrivial proofs in one shot either, so there's currently a significant asymmetry between the amount of brain computation that goes into a proof of a complex theorem and the amount of silicon computation that goes into responding to a prompt.
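To make the "iterate within a system" idea concrete, here is a minimal toy sketch of a propose-and-verify loop. The `propose` and `verify` functions are hypothetical stand-ins I'm inventing for illustration: `propose` plays the role of a model sampling a candidate, and `verify` plays the role of external feedback that rejects wrong attempts. The task (finding a nontrivial factor by guess-and-check) is deliberately trivial; the point is only the loop structure.

```python
import random

def propose(n, rng):
    # Stand-in for a model sampling a candidate solution (here: a random guess).
    return rng.randrange(2, n)

def verify(n, candidate):
    # Stand-in for external feedback: accept only a nontrivial factor of n.
    return n % candidate == 0

def iterate_until_verified(n, max_tries=10000, seed=0):
    # The outer loop: keep proposing and discarding until a check passes,
    # analogous to wrapping naive inference in a verification harness.
    rng = random.Random(seed)
    for _ in range(max_tries):
        candidate = propose(n, rng)
        if verify(n, candidate):
            return candidate
    return None

factor = iterate_until_verified(91)  # 91 = 7 * 13, so 7 or 13 verifies
```

In a real system the verifier might be a proof checker or a test suite, and the proposer an LLM; the asymmetry the comment describes is that the loop may need to run an enormous number of times before anything verifies.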
