Delomomonl | 1 year ago

Besides, I don't think the prediction framing is a bad thing. There is an argument that, depending on the architecture, there can be self-discovery of rules through compression.

The compression leads to rules, which could feel like understanding. A toy sketch of that point is below.
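To make the compression-to-rules point concrete, here is a minimal sketch (my own illustration, not a claim about transformer internals): a character-level bigram model trained only to predict the next character ends up encoding a rule like "q is followed by u", and under the implied code a fully predicted character costs zero bits.

    from collections import Counter, defaultdict
    import math

    # Toy corpus; any English text with a few 'q's works.
    text = "the quick question was quietly queued for the queen"

    # Learn bigram statistics: which character follows which.
    follows = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        follows[a][b] += 1

    def bits(a, b):
        # Code length in bits for b given a under the learned model.
        total = sum(follows[a].values())
        return -math.log2(follows[a][b] / total)

    # The model has "discovered" the rule that 'u' always follows 'q' ...
    print(follows["q"])    # Counter({'u': 5})
    # ... so that character compresses to nothing:
    print(bits("q", "u"))  # 0.0
    # A less predictable context costs more bits:
    print(bits("e", "s"))  # 3.0

The point is only that good prediction forces the statistics into something rule-shaped; whether that deserves the word "understanding" is exactly what this thread is arguing about.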

People say 'ah, it's just a parrot statistically repeating the most common words' as if that alone makes it unimpressive, which it doesn't. Not when an LLM responds to you the way it does.

If that basic thing talks like a human, why would a human be something different?

Isn't intelligence also correlated with the speed of connections? At least when you take an IQ test, speed is factored in.

mdp2021 | 1 year ago

> If that basic thing talks like a human, why would a human be something different?

Because properly intelligent humans actually think instead of being thinking simulators, as is apparent from the quality of the LLM outputs.

> parrot ... as if that alone makes it unimpressive

"What could possibly go wrong".

Delomomonl | 1 year ago

And do you have any argument at all?

After all, the output of these LLMs is often significantly better than what a lot of humans are capable of.