top | item 36336032

rogers18445|2 years ago

This raises the question: does the G-factor (IQ) curve have diminishing returns, and are humans meaningfully on the diminishing-returns side of the curve?

It's not at all obvious that a super AI would have an intellect incomprehensible to a human the same way a human is to a chimpanzee.

Or another way to phrase it, it's not at all obvious that an AI can exist that would be incomprehensible to a very smart human, it may however reason much faster than such a human.

wintorez|2 years ago

A very good question, and a very hard one to answer. From the point of view of a horse, a super horse is just a faster horse. A 2000cc superbike is incomprehensible to a horse, because it exists in a different category of speed.

I think that, similarly, a super AI's intelligence will be a completely different type of intelligence from what we have. For lack of a better word, it exists in a different dimension.

For example, things that are black boxes to us will be completely comprehensible to that super AI. Or it could solve problems that we had considered impossible to solve.

logicchains|2 years ago

>I think similarly, a super AI's intelligence will be a completely different type of intelligence than what we have. For the lack of better word, it exists in a different dimension.

This is nonsense, magical thinking. It's possible to model reasoning formally; it's called "logic": a system of deductions built upon some axioms. Any logical reasoning, no matter how complex, can be expressed in such a system, and can be understood by anyone else given enough time.
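To make that claim concrete, here is a toy sketch of reasoning as mechanical deduction: forward chaining with modus ponens over propositional facts. The fact names and rule format are illustrative inventions, not any particular formal system from the thread.

```python
# Toy forward-chaining deduction: from a set of axioms and rules of the
# form "premises -> conclusion", repeatedly apply modus ponens until no
# new facts can be derived.

def derive(axioms, rules):
    """Return every fact derivable from the axioms via the given rules."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

axioms = {"socrates_is_a_man"}
rules = [({"socrates_is_a_man"}, "socrates_is_mortal")]
print(sorted(derive(axioms, rules)))
# -> ['socrates_is_a_man', 'socrates_is_mortal']
```

The point being made above is that any chain of deductions produced this way, however long, can in principle be replayed and checked step by step by a human reader.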

dan_mctree|2 years ago

We do know that AIs have at least some advantages. For example, they're more accurate and orders of magnitude faster at floating-point calculations. They'll also have communication interfaces with other (sub)systems that are far more precise and fast than anything we could manage. And then there's the perfect memory.

I imagine such advantages also lower the minimum time needed to solve many classes of problems, and presumably that would be experienced as an incomprehensibly smart intelligence. I imagine it'd feel like chess computers do, but in most areas of life: the AI's actions would feel impossibly perfect at every turn, leaving us far behind in any attempt to compete.

ben_w|2 years ago

The G-factor in IQ, even disregarding the discussion about how useful it is or isn't for humans, is not the only factor when considering non-humans.

An AI mind which can learn and intuit as well as an IQ 130 human (2σ; the tests become unreliable above that), but also comes with the speed difference between synapses and transistors (roughly the same as the difference between jogging and continental drift), has a chance to become an expert at every subject.
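The speed analogy can be sanity-checked with a back-of-the-envelope calculation. The figures below are common ballpark values assumed for illustration (they are not from the comment): neurons firing at roughly 100 Hz versus transistors switching at roughly 1 GHz, and an easy jogging pace versus ~5 cm/year of continental drift.

```python
import math

# Rough order-of-magnitude check of the jogging-vs-continental-drift
# analogy. All figures are commonly cited ballpark values.

neuron_hz = 100        # upper-end typical neuron firing rate, ~100 Hz
transistor_hz = 1e9    # ~1 GHz switching, conservative for modern chips

jog_m_per_s = 3.0                          # easy jogging pace
drift_m_per_s = 0.05 / (365 * 24 * 3600)   # ~5 cm per year

hw_ratio = transistor_hz / neuron_hz
analogy_ratio = jog_m_per_s / drift_m_per_s

print(f"transistor/synapse ratio: ~1e{round(math.log10(hw_ratio))}")    # ~1e7
print(f"jogging/drift ratio:      ~1e{round(math.log10(analogy_ratio))}")  # ~1e9
```

With these particular figures the hardware ratio comes out around 10^7 and the jogging/drift ratio around 10^9, so the analogy lands in the right region but is loose by a couple of orders of magnitude; either way, both gaps are enormous.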

Most of us have enough difficulty truly comprehending the domains of other single human experts; a human upload with that much breadth of expertise would be incomprehensible by default, even if it spoke your language.

logicchains|2 years ago

> This raises the question: does the G-factor (IQ) curve have diminishing returns, and are humans meaningfully on the diminishing-returns side of the curve?

It definitely has "diminishing returns to acquiring resources". There are people many IQ points higher than Musk, but none of them are anywhere near as wealthy as he is, and it's not clear that Musk would be richer if he were smarter.

bobcostas55|2 years ago

A ~1 SD difference in average g at the national level is the difference between not having stable electricity and being an uber-rich technological wonderland. That suggests that even if the curve is bending, it's not a very hard bend.