top | item 42303409

mnk47 | 1 year ago

>Then there is the matter of actually defining general intelligence. It may also be the definition of consciousness, or at least require it. But currently, there is no mutually agreed upon definition of "general intelligence".

Here lies the problem. We should have a rule that any time we discuss AGI, we preface the discussion with whatever working definition we've chosen to operate on. Otherwise, these discussions will inevitably devolve into people talking past each other, because everyone has a different default definition of AGI, even within the SF AI scene.

If you ask Yann LeCun, he'll say that no LLM system is even close to being generally intelligent, and that the best LLMs are still dumber than a cat.

If you ask Sam Altman, he'll say that AGI = an AI system that can perform any task as well as the average human or better.

If you ask Dario Amodei, he'll say that he doesn't like that term, mostly because by his original definition AGI is already here, since AGI = AI that is meant to do any general task, as opposed to specialized AI (e.g. AlphaGo).

bena | 1 year ago

The definitions are one of the major sticking points.

We don't have good, clear definitions of either intelligence or consciousness.

They need to be generally agreeable: definitions that include everything we accept as intelligent or conscious, and exclude everything we accept as not.