top | item 43234994

aljarry | 1 year ago

LLMs (our current "AI") don't use logical or mathematical rules to reason, so I don't see how Gödel's theorem would have any meaning there. They are not a rule-based program that would have to abide by non-computability; they are non-exact statistical machines. Penrose even mentions that he hasn't studied them and doesn't exactly know how they work, so I don't think there's much substance here.


pelario|1 year ago

Despite appearances, they do: whatever the training, neurons, transformers and all, ultimately it is a program running on a Turing machine.

empath75|1 year ago

Well, if you break everything down to the lowest level of how the brain works, then so do humans. But I think there's a relevant higher level of abstraction in which it isn't -- it's probabilistic and as much intuition as anything else.

aljarry|1 year ago

But it is only a program computing numbers. The code itself has nothing to do with the reasoning capabilities of the model.

kadoban|1 year ago

Pick a model, a seed, a temperature, and fix some floating-point annoyances, and the output is a deterministic function of the input.
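A minimal sketch of that point, with a made-up logit vector standing in for a model's output: once the seed and temperature are fixed, the sampled token is the same every time.

```python
import numpy as np

def sample_next_token(logits, temperature, seed):
    """Temperature-scaled sampling from a vector of logits.
    Fixing the seed makes the draw fully deterministic."""
    rng = np.random.default_rng(seed)           # fixed seed => reproducible
    scaled = np.asarray(logits, float) / temperature
    probs = np.exp(scaled - scaled.max())       # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.1]                        # made-up logits for 3 tokens
a = sample_next_token(logits, temperature=0.8, seed=42)
b = sample_next_token(logits, temperature=0.8, seed=42)
assert a == b  # same logits, seed, and temperature => same output token
```

The "temperature" here is the usual trick of dividing logits before the softmax; the non-determinism people observe in practice comes from unfixed seeds and floating-point scheduling, not from anything inherent to the model.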

Lerc|1 year ago

A lot of people look to non-determinism as a source of free will. It's often what underlies people's thinking when they discount the ability of AI to be conscious. They want to believe they have free will and consider determinism to be incompatible with free will.

Events are either caused or uncaused, and either kind can itself be a cause. Caused events happen because of their cause. Uncaused events are by definition random. If you can detect any real pattern in an event, you can infer that it was caused by something.

Relying on randomness rather than reasons for decision making does not seem to be a good basis for free will.

If we have free will it will be in spite of non-determinism, not because of it.

aljarry|1 year ago

That's true of any neural network or ML model. Pick a few data points, use the same algorithm with the same hyperparameters and random seed, and you'll end up with the same result. But determinism doesn't mean the "logic" or "reasoning" is a product of the algorithm doing the computations.

layble|1 year ago

Maybe consciousness is just what lives in the floating-point annoyances

whilenot-dev|1 year ago

> LLMs (our current "AI") doesn't use logical or mathematical rules to reason.

I'm not sure I can follow... what exactly is decoding/encoding if not using logical and mathematical rules?

aljarry|1 year ago

Good point. I meant that the reasoning is not encoded as logical or mathematical rules. The neural network and its related parts rely on e.g. matrix multiplication, which works by mathematical rules, but the model doesn't answer your questions based on pre-recorded logical statements like "apple is red".
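A toy illustration of that distinction (a hypothetical two-layer network with random stand-in weights): the code itself is only fixed mathematical rules like matrix multiplication; whatever the network "knows" lives in the weight values, and nothing like an explicit "apple is red" statement appears anywhere in the program.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))   # stand-in weights; in a real model these
W2 = rng.standard_normal((8, 3))   # come from training, not from the code

def forward(x):
    h = np.maximum(0.0, x @ W1)    # matrix multiply + ReLU: pure math rules
    return h @ W2                  # no stored facts, no logical axioms

out = forward(np.ones(4))
# Swap in different weights and the same code gives different "answers":
# the rules are fixed by the program, the behavior is fixed by the weights.
```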

northern-lights|1 year ago

If it is running on a computer/Turing machine, then it is effectively a rule-based program. There might be multiple steps and layers of abstraction until you get to the rules/axioms, but they exist. The fact that they are a statistical machine intuitively proves this: statistical, because it needs to apply the rules of statistics, and machine, because it needs to apply the rules of a computing machine.

aljarry|1 year ago

The program, yes, is a rule-based program. But the reasoning and logical responses are not implemented explicitly as code; they emerge from the network and are encoded in the weights of the model.

That's why I see it as not bounded by computability: an LLM is not a logic program finding the perfect solution to a problem, it's a statistical model finding the next probable word.
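A minimal sketch of "a statistical model finding the next probable word" (a toy bigram counter over a made-up corpus, nothing like how a real LLM is trained, but the same spirit): the prediction is driven purely by counts, with no logical axioms anywhere.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": pure statistics, no logical rules.
corpus = "the apple is red the apple is sweet the sky is blue".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1          # count which word follows which

def predict_next(word):
    """Most frequent follower of `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "apple" - it follows "the" twice, "sky" once
```

There is no rule saying what "the" means; the model just reproduces the statistics of its training data, which is the sense in which it isn't a logic program.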