aljarry | 1 year ago
Not in a way that the non-computability problem of Turing machines would apply.
> Perhaps you can explain your point in a different way?
An LLM is not a logic program finding a perfect solution to a problem; it's a statistical model for predicting the next likely word. The model's code does not solve a (let's say) NP problem to find the solution to a puzzle; the only thing it is doing is picking the next most probable word from a statistical model built on top of neural networks.
This is why I think Gödel's theorem doesn't apply here: the LLM does not encode a strict and correct logical or mathematical system that could be incomplete.
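The "next best possible word" picture above can be sketched as sampling from a probability distribution. This is a toy illustration, not a real LLM: the vocabulary, probabilities, and `toy_model` function are all invented for the example; an actual model computes the distribution with a neural network over tens of thousands of tokens.

```python
import random

def next_word(context, model):
    # The model maps a context to a {word: probability} distribution;
    # we simply sample from it. No theorem proving or search involved.
    candidates = model(context)
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

def toy_model(context):
    # Hypothetical hard-coded distribution, standing in for a neural net.
    return {"mat": 0.6, "roof": 0.3, "moon": 0.1}

print(next_word("The cat sat on the", toy_model))
```

The point of the sketch: each step is a statistical lookup-and-sample, not the evaluation of a formal system to which incompleteness results would attach.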
> Related: would you claim that the physics of neurons has nothing to do with human intelligence? Certainly not.
I agree with you, though I had a different angle in mind.
> You might be hinting at something else: perhaps different levels of explanation and/or prediction. These topics are covered extensively by many thinkers.

> Such levels of explanation are constructs used by agents to make sense of phenomena. These explanations are not causal; they are interpretative.
Thank you, that's food for thought.