LLMs (our current "AI") don't use logical or mathematical rules to reason, so I don't see how Gödel's theorem would apply to them. They aren't rule-based programs that would be bound by non-computability - they are inexact statistical machines. Penrose even mentions that he hasn't studied them and doesn't know exactly how they work, so I don't think there's much substance here.
Lerc|1 year ago
Events are either caused or uncaused, and either kind can itself be a cause. Caused events happen because of their cause; uncaused events are by definition random. If you can detect any real pattern in an event, you can infer that it was caused by something.
Relying on randomness rather than reasons for decision-making does not seem like a good basis for free will.
If we have free will, it will be in spite of non-determinism, not because of it.
whilenot-dev|1 year ago
I'm not sure I follow... what exactly is decoding/encoding, if not the application of logical and mathematical rules?
aljarry|1 year ago
That's why I see it as not bound by computability: an LLM is not a logic program searching for the perfect solution to a problem, it's a statistical model picking a likely next word.
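To make the point concrete, here's a toy sketch of what "picking a likely next word" means: weighted random sampling from a probability distribution, not rule-based deduction. The vocabulary, probabilities, and function name are all made up for illustration; a real LLM computes its distribution with a neural network, but the final step is sampling much like this.

```python
import random

def sample_next_word(distribution, temperature=1.0, seed=None):
    """Pick the next word by weighted sampling, not by proving anything."""
    rng = random.Random(seed)
    words = list(distribution)
    # Temperature reshapes the weights; it never makes the choice exact.
    weights = [p ** (1.0 / temperature) for p in distribution.values()]
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical distribution a model might assign after "The cat sat on the":
next_word_probs = {"mat": 0.6, "sofa": 0.25, "roof": 0.1, "theorem": 0.05}
print(sample_next_word(next_word_probs, seed=0))
```

Note that nothing here checks whether the chosen word is "true" or logically entailed by anything - the model just draws from the distribution, which is why Gödel-style limits on formal proof systems don't obviously transfer.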