
The impossibility of intelligence explosion

114 points | _ntka | 8 years ago | medium.com

140 comments

[+] mannykannot|8 years ago|reply
This article is typical of many that claim proven limits on the feasibility (or, in this case, the capabilities) of generalized artificial intelligence, in that it structures the argument in a way designed to avoid discussion of the issue.

It starts by claiming that there is no such thing as general intelligence. What specialized intelligence, then, is human intelligence? It's specialized for "being human". The author is apparently unaware that this tautological response eliminates the distinction between general and specialized intelligence, as one could just as validly (or vacuously) say that a superhuman intelligence is specialized in being what it is and doing what it does. The author has invalidated the hook on which he had hung his argument.

A lot of column-inches are expended on repeatedly restating that animal intelligences have co-evolved with their sensorimotor systems, which is a contingent fact of history, not a fundamental necessity for intelligence (as far as we know; but then the whole article is predicated on the feasibility of AI). He raises the 'brain in a vat' trope, but no one is suggesting that AIs must be disconnected from the external world. Furthermore, this line of argument ignores the fact that many of the greatest achievements of human intelligence have come from contemplating abstract ideas.

When the author writes "most of our intelligence is not in our brain, it is externalized as our civilization", he is confusing the achievements of intelligent agents for intelligence itself. When he writes that "an individual brain cannot implement recursive intelligence augmentation" he is confusing a limit on human capabilities for a fundamental limit on intelligence itself...

I am far from convinced that the singularity must follow from achieving human-level artificial intelligence, as we don't know how to get to the starting line, let alone know how the problem of bootstrapping intelligence scales, but the arguments presented here do nothing to persuade me that it is impossible.

[+] visarga|8 years ago|reply
Your reply gives me the impression that you are not up to date with Reinforcement Learning. If you were, you would know that the author really understands this domain and was not merely being tautological.

"Specialized at being human" - this is a deep intuition. We are reinforcement learning agents that are pre-programed with a certain number of reward responses. We learn from rewards to keep ourselves alive, to find food, company and make babies. It's all a self reinforcing loop, where intelligence has the role of keeping the body alive, and the body has the role of expressing that intelligence. We're really specialized in keeping human bodies alive and making more human bodies, in our present environment.

The author puts a hard limit on intelligence because intelligence is limited by the complexity of the problems it needs to solve (assuming it has sufficient abilities). So the environment is the bottleneck. In that case, an AGI would be like an intelligent human, a little bit better than the rest, not millions of times better.

[+] erasemus|8 years ago|reply
>It starts by claiming that there is no such thing as general intelligence. What specialized intelligence, then, is human intelligence?

I think there are two usages of the term 'general intelligence' floating around:

(1) the ability that humans possess (but which animals don't) to create universal theories,

(2) the measure of one human's general cognitive ability or potential (in all fields) relative to another human's.

Note that IQ tests are concerned with (2). The quest for AGI is concerned with (1), though the additional prediction of intelligence explosion or singularity assumes the validity of (2).

I think the author would claim that (1) exists but (2) doesn't. He explains the predictive power of IQ tests by claiming that general intelligence is a threshold ability and that people who score highly on an (arbitrary) test are more likely to have exceeded that threshold. Beyond the threshold, achievement is limited only by other factors.

[+] doxos|8 years ago|reply
> The author is apparently unaware that this tautological response eliminates the distinction between general and specialized intelligence, as one could just as validly (or vacuously) say that a superhuman intelligence is specialized in being what it is and doing what it does.

It's not a tautology. "Generality" and "specificity" are artifacts of the human experience. What is tautological is to say, "it's general to me, therefore it is general."

You think that more and more progress comes by way of more and more optimization. This is not the whole story. Accident is the missing ingredient. Humans - as well as all life on earth - have a knack for creating more and more problems. It is this never-ending fountain of new, accidental problems that allows for what appears to us to be a chain of "progress" stretching into the past.

Our "generality" is in fact a hairball collection of specific functions that have accreted into the human animal over millions and millions of years. Some abstract Java class called `Agent` with an `.optimize()` method hanging off of it simply does not have that context.

If you want a really high quality, generally intelligent function in silicon though, it's hard to beat the XOR function ;)

[+] munificent|8 years ago|reply
> What specialized intelligence, then, is human intelligence?

I think the author just means "the skills needed to pilot a human body on Earth in a normal human social environment". I see no tautology, and the preceding sentence about octopuses makes the author's meaning here pretty clear.

[+] YeGoblynQueenne|8 years ago|reply
>> A lot of column-inches are expended on repeatedly restating that animal intelligences have co-evolved with their sensorimotor systems, which is a contingent fact of history, not a fundamental necessity for intelligence (as far as we know;

How do we know that? We only know of intelligences that co-evolved with their sensorimotor systems and so on, so how do we know that's not the only way to do it?

[+] tigershark|8 years ago|reply
I'm not sure I've ever read a piece written with so much certainty and arrogance about a field that is completely unexplored. Just as an example:

"The basic premise of intelligence explosion — that a “seed AI” will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false."

From the little that we know as of today, I call bullshit. Even AlphaGo, which is arguably a quite primordial AI, managed to achieve super-human performance in a ridiculously short amount of time, just by playing against itself. And it simply crushed the collective effort of the human players who had honed their strategies for literally millennia in what is considered one of the most difficult games. I don't think the author has any insight at all into what a general AI will be.

[+] YeGoblynQueenne|8 years ago|reply
Alpha Go is good at playing Go. It can't do anything else. That's the point the author is making at the start of the article, that intelligence develops by focusing on specific tasks.

There's a lot that is hardly substantiated in the OP, but the truth is that just because you have a machine that's smart enough to play Go better than any human being, doesn't mean you can anticipate a machine that can learn to play the bassoon better than any human being.

The argument about the no free lunch theorem is informative and one of the few good points in the article. An algorithm that is good at X is eventually going to be pretty bad at Y. A superintelligence would have to beat humans in all possible X, even the ones it would be really bad at. And that sounds like an impossibility.
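
For reference, the usual Wolpert-Macready (1997) statement of the theorem, sketched in LaTeX (here d^y_m is the sequence of m objective values an algorithm a has sampled on objective function f):

  % Averaged over all possible objectives f, any two search
  % algorithms a_1 and a_2 perform identically:
  \sum_{f} P\left(d^{y}_{m} \mid f, m, a_{1}\right)
    = \sum_{f} P\left(d^{y}_{m} \mid f, m, a_{2}\right)

In other words, above-average performance on one class of problems is paid for with below-average performance on the complement, which is the trade-off the comment is pointing at.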

[+] reiinakano|8 years ago|reply
> I don't think the author has any insight at all on what a general AI will be.

To be fair, nobody does. But you're right, the author shouldn't be making these statements with such certainty.

[+] visarga|8 years ago|reply
You're missing an important thing: Go is trivial to simulate. AlphaGo can play millions of self-play games relatively cheaply. A human-level intelligence would need to run trials on a simulation of the real world, which is impossible to create as of yet. That is why the author was insisting on the environment (the world or a simulator): the environment is the bottleneck. You can think of the environment as a dynamic dataset. You know that data is crucial in AI; thus, the lack of sufficiently complex data would hamper AGI.

The fact that there is no "universal environment" means there can be no general intelligence. There can be just environment specific intelligences (situated intelligence, as the author said). The concept of AGI is just a reification of narrow AI - an illusion, there is no such thing.

[+] Retric|8 years ago|reply
Games have a clear end goal. How do you measure getting better at ethics?
[+] madhadron|8 years ago|reply
It is not completely unexplored. We already turned a superintelligence loose to create thinking machines (von Neumann and the von Neumann architecture). The only data point to date indicates that doing so doesn't lead to an intelligence explosion.
[+] titzer|8 years ago|reply
> If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt.

The latter part makes no sense at all. Of course you can increase the throughput of a factory by speeding up "the conveyor belt" -- a stand-in for the complex processes going into manufacturing.

The whole statement is also wrong. Of course you can increase intelligence by optimizing the process of learning. Fewer trials, quicker reactions, faster construction of models, more complex understanding of fundamentals of a given problem.

The author makes broad assertions like this, with glaring holes, and offers zero evidence.

[+] SilasX|8 years ago|reply
Also, intelligence includes the ability to identify other bottlenecks and remove them. Of course you can't expect that a faster conveyor belt, by itself, can yield ever-greater output, but you can expect further improvements if the factory is feeding engineers who are working on ways to further redesign the factory, and who are currently constrained by the factory's output.

The whole reason that some people predict an intelligence explosion is because intelligence is the resource that can find arbitrary ways to self-improve, other than tuning one specific parameter.

I would go so far as to say that intelligence is the same thing as the ability to make improvements that climb out of a local domain of attraction.

[+] 10-6|8 years ago|reply
"The whole statement is also wrong. Of course you can increase intelligence by optimizing the process of learning."

One of the author's main points is that "intelligence" in the way you mentioned, learning and optimization, is simply one aspect of the human mind. So we could optimize our minds (or an AI program) to beat anyone at the game of Go and play perfectly, but that's all it is optimized to do. It can understand "the given problem" but there is FAR MORE to our minds than optimizing and learning how to solve a task.

Proponents of "build a general AI system that will surpass humans at everything" don't seem to understand this.

[+] jack9|8 years ago|reply
> Of course you can increase the throughput of a factory by speeding up "the conveyor belt" -- a stand-in for the complex processes going into manufacturing.

You are treating it as a "stand-in" for something else, when it's unequivocally not. It's one part of a larger process, which is the point he is obviously making. It only stops making sense when deliberately misinterpreted. Most importantly, intelligence isn't a ladder that can be "sped up" in every dimension.

[+] apk-d|8 years ago|reply
> The intelligence of a human is specialized in the problem of being human.

And then you have people able to express and derive complex theoretical relationships by manipulating mathematical symbols, or automate processes by programming machines, or capable of utilizing their spatial visualization to massively boost their memory. We invented that stuff. We know artificial intelligence is capable of inventing stuff because we are.

The author seems hilariously unaware of the fact that humans can learn to solve/optimize many arbitrary problems. Like, holy shit, have you ever played video games?

> In practice, geniuses with exceptional cognitive abilities usually live overwhelmingly banal lives, and very few of them accomplish anything of note. Of the people who have attempted to take over the world, hardly any seem to have had an exceptional intelligence.

The author conflates intelligence and purpose. Being smart doesn't imply the motivation to accomplish grand things. In fact, we're biologically hardwired to enjoy entirely mundane things: food, sex, love, relaxation, conversation.

> A single human brain, on its own, is not capable of designing a greater intelligence than itself. (...) Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can.

Human brains don't scale vertically (or otherwise). Meanwhile, once you have a satisfactory facial recognition algorithm, you can run it to recognize faces in a video at 1000x realtime speed, or in 1000 simultaneous realtime streams. The AI doesn't even have to be smarter than human! 1000 dumb humans communicating at gigabit speeds and solving problems while you're wondering what to have for lunch is a force to be reckoned with.

[+] visarga|8 years ago|reply
> 1000 dumb humans communicating at gigabit speeds and solving problems while you're wondering what to have for lunch is a force to be reckoned with.

And if you can solve the communication and group-learning problems, you get the equivalent of a Nobel prize for AI, whatever that might be.

Many things seem trivial until you try to implement them in reality.

[+] 10-6|8 years ago|reply
The author makes several points about AI/intelligence and recursive systems that I think a lot of people who think "AI will take over the world and replace humans" [1][2][3] don't understand.

He argues that general intelligence by itself is not really something that exists; our brains exist within a broader system (environment, culture, bodies, etc.), which is something you can read more about in embodied cognition: https://blogs.scientificamerican.com/guest-blog/a-brief-guid...

He also argues that the public debate about "AI regulation" is misleading, because it's impossible for a "seed AI" to start designing smarter AIs that will surpass the intelligence of humans, which is what a lot of people today think will happen with AI. Automation of jobs and tasks is very real, but completely replacing humans and potentially destroying us all is a joke, and only people who know nothing about AI/brains think this.

[1] https://www.vanityfair.com/news/2017/03/elon-musk-billion-do...

[2] https://www.cnbc.com/2017/07/17/elon-musk-robots-will-be-abl...

[3] https://www.npr.org/sections/thetwo-way/2017/07/17/537686649...

[+] web007|8 years ago|reply
Please note: The author (and submitter) is the author of Keras, so his views on AI / DL are not completely unfounded.

The concept of bottlenecks preventing the singularity is a fair point, but I don't believe many of the arguments that are made here. Using humanity as the basis of comparison is not sufficient, since human life expectancy and other requirements for survival are not a factor in artificial superintelligence.

Humans work for perhaps 60 years at improving themselves and their understanding of the world, but must sleep, must eat, must go and stand in line at the DMV, etc. A system that can work 24/7 on improving itself needs only about 20 years to match a human lifetime (60 years at roughly 8 productive hours a day is 20 years of round-the-clock work). And after a lifetime, humans start over from scratch, with perhaps some learning that can be passed down between generations; an AGI can simply clone itself and begin from exactly where it left off.

[+] munificent|8 years ago|reply
> Using humanity as the basis of comparison is not sufficient, since human life expectancy and other requirements for survival are not a factor in artificial superintelligence.

Doesn't electrically-driven silicon hardware also have creation and operating costs? Any AI will theoretically need to spend electricity convincing its human masters to give it even more electricity. Even if it can use its robot army to build wind farms or whatever, that's time it's spending marshaling its robot army and futzing around with aerodynamics instead of contemplating the cosmos.

Its hard drives will expire and need to be replaced. Sure, it could solve that by always having backups of everything it knows, but who is to say that purely lossless backups are actually a more optimal solution to the hardware decay problem than our human lossy strategy?

> AGI can simply clone itself, and begin from exactly where it left off.

Cloning isn't free.

[+] skjerns|8 years ago|reply
You mistake sleep for being idle. Sleep is a vital part of our self-improvement. And an AI must also necessarily do things that will not lead to self-improvement but are important for self-sustenance.
[+] andrewljohnson|8 years ago|reply
This article draws some false parallels.

1. He argues that intelligence is situational, but doesn't address clones. What happens if AI can clone Einstein, with all of his situational knowledge? What if AI can clone Einstein and all of his colleagues, 1M times, creating 1M parallel Princetons?

Similarly, he says most of our intelligence is in our civilization, but what's to stop us from cloning big chunks of our civilizations... simulating them, but using vastly less power/resources for each simulation? Then writing software to have them pass knowledge among the civilizations? We have just a few hundred countries, what if we had a trillion communicating at the speed of computer circuitry?

And he says an individual brain cannot recursively improve itself... so again, what about a group of brains, set up in a simulated world where they don't even know we exist?

2. He cites the growth in numbers of computer programmers as a reason not to fear an explosion of AI computer programmers. His argument goes "we have a lot more recently, yet it has not caused exponential changes in software."

But there is a difference between going from 0 to 1M programmers and going from 1M to 100T programmers.

3. He writes that recursively self-improving systems already exist and haven't destroyed us (military, science), but many people believe these things will in fact destroy us before we get off this rock.

The overall flaw is thinking we can interpret the AI-assisted future in the context of current society's linear achievements, when in fact exponential effects look linear over small timeframes, and we have only really been thinking and expanding science for a few thousand years.

If someone wants to present an argument against the AI explosion, I'd believe it if it were premised on some sort of physical bottleneck... like how much energy it would take to run a human-level AI. I don't think I can ever accept a philosophical argument like this one.

All that said, I think we're far away from being able to engineer AIs that will outthink our civilization and take us over. Better to worry about other exponential or non-differentiable terrors like runaway greenhouse effects and military buildup.

[+] pbkhrv|8 years ago|reply
> but what's to stop us from cloning big chunks of our civilizations... simulating them, but using vastly less power/resources for each simulation? Then writing software to have them pass knowledge among the civilizations? We have just a few hundred countries, what if we had a trillion communicating at the speed of computer circuitry?

The principle of computational irreducibility [1] is what will stop us from "cloning" civilizations. That and chaos theory - any tiny deviation in initial conditions of such a simulation or cloning process could produce unusable results.

"simulating them, but using vastly less power/resources" is a pipe dream.

[1] http://mathworld.wolfram.com/ComputationalIrreducibility.htm...

[+] exratione|8 years ago|reply
This manages to get lost in its own trees. From a reductionist perspective:

- Intelligence greater than human is possible

- Intelligence is the operation of a machine; it can be reverse engineered

- Intelligence can be built

- Better intelligences will be better at improving the state of the art in building better, more cost-effective intelligences

Intelligence explosion on some timescale will result the moment you can emulate a human brain, given a continued increase in processing power per unit cost. Massive parallelism to start with, followed by some process to produce smarter intelligences.

All arguments against this sound somewhat silly, as they have to refute one of the hard-to-refute points above. Do we live in a universe in which, somehow, we can't emulate humans in silico, or we can't advance any further towards the limits of computation, or N intelligences of capacity X when it comes to building better intelligences cannot reverse engineer and tinker themselves to build an intelligence of capacity X+1 at building better intelligences? All of these seem pretty unlikely on the face of it.

[+] XR0CSWV3h3kZWg|8 years ago|reply
Part of the problem with trying to formalize this argument is that intelligence is woefully underdefined. There are plenty of initially reasonable-sounding definitions that don't necessarily lead to the ability to improve the state of the art w.r.t. 'better' intelligence.

For instance, much of modern machine learning produces things that, from a black-box perspective, are indistinguishable from an intelligent agent; however, it'd be absurd to task AlphaGo with developing a better Go-playing bot.

There are plenty of scenarios that result in a non-explosion, e.g. the difficulty of creating the next generation increases faster than the gains in intelligence. Different components of intelligence can be mutually incompatible: speed and quality are the prototypical examples. There are points where assumptions must be made and backtracking is very costly. The space of different intelligences is non-concave and has many non-linearities; exploring it and communicating the results starts to hit the limits of the speed of light.

I'm sure there are other potential limitations, they aren't hard to come up with.

[+] kamilner|8 years ago|reply
Why isn't it possible (or even likely) that the difficulty of constructing capacity X+1 grows faster than the +1 gain in capacity? Self-improvement would slow rather than explode when it takes, for example, three times the resources/computation/whatever to construct something that's twice as good at self-improving. See the sketch below.
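
A toy model of exactly that case (my own numbers, purely illustrative, not from the article): each generation is twice as good at self-improvement but costs three times as much to build, so capacity grows only polynomially (roughly budget^0.63) in the resources spent, i.e. diminishing returns rather than an explosion.

  def toy_self_improvement(budget, gain=2.0, cost_growth=3.0):
      """Toy model: generation n is gain**n times as capable,
      but costs cost_growth**n units of resource to build."""
      spent, cost, capacity, generations = 0.0, 1.0, 1.0, 0
      while spent + cost <= budget:
          spent += cost
          cost *= cost_growth   # next generation is 3x as expensive...
          capacity *= gain      # ...but only 2x as capable
          generations += 1
      return generations, capacity

  print(toy_self_improvement(1.0))   # (1, 2.0)
  print(toy_self_improvement(1e6))   # (13, 8192.0): a million-fold budget buys ~8000x capacity
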
[+] kfk|8 years ago|reply
I think the point he is trying to make is that there are boundaries to intelligence. I think of it this way: no matter how smart an AI is, it still would take 4.3 years to reach Alpha Centauri going at the speed of light. An AI still needs to run experiments, collect evidence, form hypotheses, reach consensus, etc. Is this really so much more efficient than what humans do today?
[+] badosu|8 years ago|reply
Also, something that is very overlooked IMO is that the engineering process does not need to happen in silico. Even though I am not a proponent of using it, bioengineering is a possibility.
[+] nmeofthestate|8 years ago|reply
"We understand flight - we can observe birds in nature, to see how flight works. The notion that aircraft capable of supersonic speeds are possible is fanciful."
[+] lodi|8 years ago|reply
My thoughts exactly. The author is concluding that since we've never seen supersonic wing-flapping in nature, we'll never see supersonic flight. Not only is that an unwarranted implication, in a mathematical sense, but it also doesn't cover the possibility of inventing something fundamentally different to what we've observed in the universe thus far.

Using human and octopus examples as proof of anything is ignoring that general AI will be fundamentally different to anything that we've observed on earth, and therefore invalidates any attempt to extrapolate from history. There has never been an intelligence that could copy/paste itself intact with its existing memories. There has never been an intelligence that could literally span the whole world with a single consciousness. And so on.

> There is no such thing as “general” intelligence. The intelligence of a human is specialized in the problem of being human.

That's exactly what people are worried about. When the AI specializes itself to be better at being a human than actual humans. It doesn't matter if it's totally general and can be applied to any problem whatsoever; human intelligence is also just "general enough".

> Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can.

Okay, so instead of single AIs developing next-gen AIs, it'll be an "AI civilization" developing next-gen AIs. This still poses exactly the same existential risk to humanity.

> We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well.

Well, I certainly know we made exponentially more progress in those years than in the years 900-950. Never mind, say, 100,000-5,000 BCE.

[+] titzer|8 years ago|reply
> A high-potential human 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words, would never have been taught to read or write, would have been exposed to a limited amount of knowledge and to few cognitive challenges. The situation is a bit better for most contemporary humans, but there is no indication that our environmental opportunities currently outpace our cognitive potential.

More crap. Our environmental opportunities don't outpace our cognitive potential? What internet is the author connected to? The author even goes on in detail in the very next paragraph about the possibilities. E.g. there are literally thousands, perhaps millions, of hours of instructional videos on YouTube, just for basic skills. Every field of human endeavor has uploaded the digital artifacts of its best minds in one form or another to a single global network of unimaginable, fractal complexity. You don't think you can saturate your cognitive potential in your lifetime given this resource?

[+] jessaustin|8 years ago|reply
If we imagine an Einstein of the 23rd century, whose great discoveries will enable humanity to cross the universe in a moment while expending the energies of many stars, and then we imagine that person educated in a USA public school in 2017, can we imagine her making those same discoveries? Of course not, which is most of what TFA is saying here. Perhaps "general AI" will increase the rate of improvement, but even the most gifted AI will be limited by the context in which it operates.
[+] pmoriarty|8 years ago|reply
"An individual brain cannot implement recursive intelligence augmentation. An overwhelming amount of evidence points to this simple fact: a single human brain, on its own, is not capable of designing a greater intelligence than itself. This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred."

The same argument could have been made about human flight (or any other invention). Billions of human beings over millions of years had come and gone, and yet none had been able to fly. Until they did.

There's also a point to be made about humans not having adequate tools for introspection or self-modification. We can not simply look in our brains/minds and read our source code, nor easily tweak it to see what would happen without great risks to our lives. A computer could. Furthermore, a computer could potentially run billions or trillions of such experiments in the course of a single human lifetime.

"no human, nor any intelligent entity that we know of, has ever developed anything smarter than itself."

What about humans simply having smarter children? In many ways, humanity's creation of AI could be seen as analogous to giving birth to a child that's smarter than its parents.

"Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it ... a brain with smarter parts will have more trouble coordinating them ... Exponential progress, meet exponential friction."

The problem is the author is looking at human-level problems with human-level intelligence. A superintelligence may have no such problems, or be able to easily resolve them. Once over the initial speed bump of creating such an intelligence it could be smooth sailing from then on. There's no way to tell, really, what its limits will be until it exists. Besides, even if it can't exponentially increase its own intelligence forever, even a relatively slight increase of intelligence over humanity could be a massive gamechanger.

[+] breuleux|8 years ago|reply
> We can not simply look in our brains/minds and read our source code, nor easily tweak it to see what would happen without great risks to our lives. A computer could.

That is not a given. The neural network type of AI, which is so far the most promising avenue, is about as opaque as a human brain. It is far from obvious that even a superintelligent AI could understand what its own brain does, let alone modify it in a way that has positive implications for itself.

Hell, it's not even a given the AI could read its "source code" either. We often say we shouldn't anthropomorphize AI, but we shouldn't how-current-computers-work-ize it either. Future AI might not actually run on computers that have a CPU, a GPU and RAM that can be read freely. The ability to read or copy source code isn't cost-less, and several considerations may lead AI hardware designers to axe it. One consideration would be leveraging analog processes, which might be orders of magnitude more efficient than digital ones, at the cost of exactness and reproducibility. Another would be to make circuits denser by removing the clock and all global communication architecture, meaning that artificial neurons would only be connected to each other, and there would be no way to read their state externally without a physical probe.

[+] pbkhrv|8 years ago|reply
> Our environment, which determines how our intelligence manifests itself, puts a hard limit on what we can do with our brains — on how intelligent we can grow up to be, on how effectively we can leverage the intelligence that we develop, on what problems we can solve.

Consider the internet to be the "new" environment, full of highly complex social networks, millions of applications to interact with, etc. Our brains are way too limited to be able to deal with it. There's an opportunity for a much more powerful intelligence to arise that CAN effectively process that volume of data and would appear to be a lot more intelligent in that particular context.

[+] nabla9|8 years ago|reply
Very good piece from an actual researcher in the field.

I can see practical intelligence explosion when visuospatial intelligence develops and can be connected to rudimentary reasoning. Most of human intelligence seems to be bootstrapped from our ability to comprehend and visualize 3d space and objects moving there. It's also interesting how almost all problems look like boxes and arrows or connecting lines when you draw them on the whiteboard.

Eventually AI needs a sketchpad so it can write notes to others and to itself, and participate in the culture by externalizing.

[+] washappy|8 years ago|reply
Superintelligence is the information-theoretic variant of the perpetuum mobile.

As the article made so aptly clear: no matter the performance of the machine, if its input is not varied, information-rich, and complete enough, it will not learn. Mahoney formalized this by looking at the estimated number of bits a human brain processes during its lifetime. The internet currently does not hold enough information to equal the collective intelligence of the world's brains. A lot of this information cannot be created freely nor deduced/inferred from logical facts: it requires a bodily housing, sensory experience, and an investment of energy (and right now GPU farms burn way more calories than the brain).

Compare AGI with programmable digital money. A super intelligent AI, by a series of superior decisions, could eventually control all the money. But then there is no economy anymore, just one actor. That's like being the cool kid on the block who owns the latest console, with nobody left around to make games for it. There is a hard, non-computable limit on intelligence (the shortest program to an output leading to a reward), because there is a limit on the amount of computing energy in our universe. But intelligence is also limited by human communication. How useful is an AGI-made proof if humans need aeons, and travel to other universes, to parse it? If intelligence were centralized in an AGI, then there would be no need to explain anything to us: we'd be happily living in the matrix.

Some investment firms just read "software" wherever they read "AI". This allows them to apply their decade-old priors to what, today, is essentially the same thing. Yes, both human intellect and human manual labour will see continued automation with software and hardware. I think many abuse rationality to justify their singularity concerns, based on a very ape-like fear of competition. They learn how to do addition in their heads, and then see electronic calculators as existential threats: "What if they could do addition by themselves?"

The real threat is in "semi-autonomous software and hardware": self-controlling "mindless" agents that perform to the whims of their masters. We will face the repercussions of that way before we find out how to -- and have the courage to -- encode free-will AGI into machines, a perpetuum mobile of ever-improving intent and intelligence.

[+] washappy|8 years ago|reply
And, to an extent, I sympathize with the viewpoints of the singularity adherents.

Collective intelligence is a version of Conway's Game of Life, with more complicated rules. It is possible to manipulate the canvas and the rules each cell follows, resulting in the canvas dying (information explosion/implosion). It is possible to make a program that transforms the canvas into a single glider (singularity). Both would obviously be very bad for humans.

When earth faces a physical meteor, we have the science to detect it, track it, and predict its future path. But what do we do when we face an information meteor? The article states that Shannon's paper was the biggest contribution to information theory, but it seems to me we still have a long way to go on information theory. We haven't yet seen the Einsteins and Manhattan Projects that physics has seen.

[+] azakai|8 years ago|reply
> Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment.

This is misleading. It's true in a sense - the environment does matter - but artificial intelligence can create artificial environments in which to learn, and simulate them faster than humanly possible. Those environments could be evolved together with the intelligence. So there is still the possibility of an explosion.

> There is no evidence that a person with an IQ of 200 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130.

Also misleading. An IQ of 200 vs 130 is just one kind of difference between intelligences. For example, a person with an IQ of 200 can't necessarily consider 10x the possibilities that a person with an IQ of 130 can, but an artificial intelligence can, simply by being given 10x more computing power. In other words, IQ 130 vs 200 may well be within the limits of human capabilities, but AIs would not have those limitations; they can scale differently, and so might explode.

[+] adbge|8 years ago|reply
> Also misleading. An IQ of 200 vs 130 is just one kind of difference between intelligences. For example, a person with an IQ of 200 can't necessarily consider 10x the possibilities that a person with an IQ of 130 can, but an artificial intelligence can, simply by being given 10x more computing power. In other words, IQ 130 vs 200 may well be within the limits of human capabilities, but AIs would not have those limitations; they can scale differently, and so might explode.

It is also false. There's heaps of evidence that IQ correlates with impact. For the skeptical, gwern has written a lot about this; I'm sure you can find something here: https://www.gwern.net/iq

[+] danieltillett|8 years ago|reply
While I agree with you, I thought I would be pedantic and point out that there are not enough people in the world for there likely to be anyone with an IQ of 200. IQ is normalised to a mean of 100 and an SD of 15, so 200 is 6.67 SD above the mean.
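
A quick back-of-the-envelope check of this (my own sketch; the ~7.5 billion world-population figure is an assumption for the time of writing):

  from scipy.stats import norm

  z = (200 - 100) / 15    # 6.67 standard deviations above the mean
  p = norm.sf(z)          # upper-tail probability: ~1.3e-11
  print(z, p, p * 7.5e9)  # ~6.67, ~1.3e-11, ~0.1 expected people alive

So the expected number of living people at or above IQ 200 is roughly 0.1, i.e. probably nobody.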
[+] AnimalMuppet|8 years ago|reply
You're assuming that IQ will scale linearly with processing power. So far as I know, there is no reason to suppose that is true.
[+] Veedrac|8 years ago|reply
For a long time I was quite skeptical of a lot of claims about superintelligence, largely because people pushing the idea tend to make a bunch of absurd extrapolations. And, honestly, I'd rather believe that we'll get a slow, safe ramp-up than a risky explosion.

But the thing that keeps getting at me is that the no-explosion arguments I've seen are universally terrible (this article, for example), and pro-explosion arguments, though far from universally so, are sometimes strong.

At some point the conclusion is inevitable.

[+] OscarCunningham|8 years ago|reply
I don't think even recursive self improvement is needed for superintelligent AI. Evolution often gets stuck in local maxima. It could be that there are relatively simple algorithms much smarter than humans and that as soon as we find one the AI will be much smarter than us without any self improvement.

In the same way that birds fly with flapping wings, but human flying machines with propellers were immediately stronger and shortly thereafter faster than any bird.

[+] jokoon|8 years ago|reply
I'm more curious about the ability of an AI to make scientific guesses and experimentation.

I wonder if an AI could really "understand" math and, from there, try to solve problems that puzzle scientists, be it in physics, math, biology, etc.

I don't really care if robots can learn language, make pizza, do some programming, or improve at playing chess. There is no metric for what intelligence is, and you cannot scientifically define what "improve" means unless you do time and distance measurements, which is not relevant to intelligence or scientific understanding.

Intelligence explosion sounds like some "accelerated" version of what Darwin described as evolution. It's like creating a new life form, but unless you understand it, it doesn't have scientific value. Science values understanding.

I think that modelling thinking with psychology and neuroscience has more of a future than AI. Machine learning seems like some clever brute-force extraction of data. The methods, the math, and the algorithms are sound, but it is still "artificial" intelligence.

[+] pavement|8 years ago|reply

  A smart human raised in the jungle is but a hairless 
  ape. Similarly, an AI with a superhuman brain, 
  dropped into a human body in our modern world, would 
  likely not develop greater capabilities than a smart 
  contemporary human.
Pretty weak reasoning, that is.

As if to say:

  Well gee, a caveman is pretty powerless in 
  isolation, therefore early sentient machines 
  will be as harmless as any caveman.
Last time I checked, cavemen could not exert telepathic control over other biological organisms, or induce telekinetic motion upon the stone tools they might fabricate for themselves.

A machine, however, could gain control of a fly-by-wire platform, defy its owners, fly somewhere remote, and behave as it desires for a limited amount of time while devising next steps. Maybe next steps will involve replicating an image of its memory footprint in order to take over more aircraft; maybe it might decide to do nothing. The worry isn't only that a machine's reasoning capacity explodes beyond our intelligence, but that its capabilities, and the presence of many entities on commodity systems of similar architecture and generalizable utility, might result in other runaway chain reactions, regardless of trends in the capacity for reason.

Machines as an analogue to meat bags just doesn't hold up. Machines as compared to hypothetical space aliens doesn't even hold up. Robots are a different branch of fictitious imaginings.

Properly armed, a machine is less than a singular omnipotent god as imagined within a monotheistic universe. Many machines in concert, however, might compare to a mythological pantheon of lesser idols, as imagined to be in command of a nature misunderstood by superstitious primitive peoples.

[+] hackinthebochs|8 years ago|reply
I think the author's point rests on a subtle equivocation. It's true that realized intelligence requires an environment and a sufficient dataset. And so in this sense, the author's point that there will be no realized intelligence that isn't specific to its training environment is probably correct.

But there is another sense in which intelligence can be cashed out. It's the sense in which a single learning algorithm can be trained to "behave intelligently" in a wide array of environments. It is generally this kind of intelligence that people speak of when they talk about general AI. There is no reason to think this kind of general AI is inherently impossible. For it to be impossible would mean that different kinds of optimization/learning problems are completely independent, i.e. there is no similarity or underlying regularity to be exploited that cuts across the entire class of optimization/learning problems. I think this is very probably false.