nvm0n1 | 2 years ago

This article segues from the failure of 80s-style expert systems (in many domains) to a generic argument for x-risk and superintelligence explosions. But it doesn't address the obvious counter-arguments, except for one about the definition of intelligence, I guess because Alexander feels they're already addressed.

The argument goes roughly: once you get smart enough, you can do things that make you even smarter, and AI will be one of those things. We already know that bigger blobs of compute with more training data can do more things in correlated ways: frogs are outclassed by cows, chimps, and humans; village idiots are outclassed by Einstein; GPT-2 is outclassed by GPT-4. At some point we might get a blob that is better than humans at designing chips, and then we can make even bigger blobs of compute, even faster than before.

But what about:

1. Limited to human performance by training data? OpenAI apparently aren't training GPT-5 because they think that research direction is tapped out. Their focus has been on augmenting this "superintelligence" with boring logic-based systems like calculators, Python interpreters, web browsers, and 80s-style expert systems like Wolfram Alpha (a toy sketch of that tool-dispatch pattern follows this list). All this is suspiciously like what a human would need, not what a superintelligence would need. It implies they don't think they can do another 2->3->4 style leap, probably due to a lack of training data that would yield more advanced capabilities.

2. Bottlenecked by physical experimentation? Alexander casually asserts that if you trained an AI on circuit design it'd immediately do better than human designers, that this would be used to build better AI chips, and that those would in turn yield a smarter AI, ad infinitum. But Google already tried this and it just led to fraud claims, not better chips. And even if an LLM came up with an idea for a better chip, humans would still need to do the physical experimentation to figure out how to build it.

3. Bottlenecked by lack of imagination? LLMs can be "creative" in the artistic sense, but there is a suspicious and very noticeable absence of them coming up with genuinely interesting ideas. They can think faster than humans and know immeasurably more, yet has anyone found even one example of the sort of out-of-the-box thinking that would be required for AI to outsmart humans? Where are all these scientific breakthroughs the AI evangelists keep promising? GPT-4 is pretty damn smart, but despite my asking it many things, it has never once come up with an idea I hadn't already had.
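
To make that concrete: the "bolt tools onto the model" pattern is basically a dispatch loop. This is a purely illustrative toy sketch in Python (the fake_model stub and the TOOL line protocol are my own inventions, not OpenAI's actual plugin API):

    # Toy tool-dispatch loop: the model emits a tool request, plain
    # deterministic code does the real computation, and the result is fed
    # back into the context. Everything here is an illustrative stand-in.

    TOOLS = {
        "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy only
    }

    def fake_model(prompt: str) -> str:
        """Stand-in for a real chat-completion call: it 'knows' it is bad at
        arithmetic, so it requests the calculator, then copies the result."""
        if "[calculator returned" in prompt:
            return prompt.rsplit("returned ", 1)[1].rstrip("]")
        return "TOOL calculator: 23 * 19"

    def answer(question: str, model=fake_model, max_steps: int = 5) -> str:
        prompt = question
        for _ in range(max_steps):
            reply = model(prompt)
            if reply.startswith("TOOL "):
                name, _, arg = reply[len("TOOL "):].partition(": ")
                prompt += f"\n[{name} returned {TOOLS[name](arg)}]"  # feed result back
            else:
                return reply  # model produced a final answer
        return "no answer"

    print(answer("What is 23 * 19?"))  # -> 437

Whether the tool is a calculator or Wolfram Alpha, the structure is the same: the hard, reliable computation happens outside the model, which is exactly the "what a human would need" shape of the thing.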

The article's own examples seem to be its undoing. There is no such thing as strength-leading-to-more-strength in some sort of recursive loop, which is why he needs such a bizarre and artificial example. Why should we believe there is such a loop for intelligence, when the history of human intelligence is a 2,000-year struggle for even quite minor improvements in cognitive ability, and even that is highly debatable?

breuleux | 2 years ago

I think it's an important unanswered question whether intelligence really is the bottleneck for many of the things that require it. Is the production of better and better computer chips really bottlenecked by the intellect of the designers, or by simulation software and the back-and-forth between design and the physical process of prototyping and testing?

And if intelligence is not the bottleneck... well... is superintelligence actually worth as much as we think it is? Is human intellect the apex of what biological systems can do, or is it merely the point past which intelligence stops being the bottleneck and the returns on higher intelligence drop off dramatically?

nvm0n1 | 2 years ago

Yes, fully agree. There are a lot of unstated and unintuitive assumptions and intuitions at work in the AI risk/ethics community. It's useful to surface them.

monkeyjoe | 2 years ago

I think this is a really interesting take.