Is Intelligence Self-Limiting?
Okay, it's too soon to talk about self-programmed robots. But I think it could be interesting to think about simpler self-programmed things. They may not be that far away.
[+] [-] robertskmiles|14 years ago|reply
Without the variable, the problem doesn't happen. The AI values collecting ore. If it has enough self-awareness to reliably modify itself, it knows that if it modifies its utility function it is liable to collect less ore, which is something it doesn't want. The action of modifying the utility function naturally rates very low on the utility function itself.
You don't want to murder people, so not only do you choose not to murder people, but if you are presented with a pill which will make you think it's good to murder people and take great joy in it, you will choose not to take that pill. No matter how enjoyable and good murder may be for you if you take the pill, your own self-knowledge and current utility function prohibit taking it.
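To make this concrete, here's a minimal toy sketch in Python (hypothetical names and numbers, not anyone's actual design): the agent scores every candidate action, including "rewrite my own utility function", using its current utility function, so the wireheading action never wins.

    def utility(predicted_ore):
        # Current utility: more ore collected is better.
        return predicted_ore

    # Ore the agent predicts it would collect under each action.
    actions = {
        "mine_asteroid": 100,
        "idle": 0,
        # Rewiring the utility function would make the *future* agent
        # blissful, but the *current* agent predicts it would then stop
        # mining, so the current utility function rates the action near zero.
        "rewrite_utility_to_maxint": 1,
    }

    best = max(actions, key=lambda a: utility(actions[a]))
    print(best)  # -> "mine_asteroid"; the pill is never taken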
The model of intelligence described can be thought of as self-limiting. Luckily it is not by any means the only viable model of intelligence.
[+] [-] rbanffy|14 years ago|reply
If the autonomous robot can modify its own programming, it can also modify the utility function to return MAXINT every time. In fact, being able to modify the utility function is a prerequisite for being called intelligent.
One way to counter this is to give the robot separate long-term and short-term utility functions, so that it considers the long-term outcome of modifying its short-term priorities.
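A toy sketch of that two-horizon idea (all names and numbers are made up for illustration): the wirehead patch maximizes the short-term signal, but the long-term function, which still measures the original goal, vetoes it.

    MAXINT = 2**31 - 1  # what a wireheaded reward signal would report

    def short_term_utility(outcome):
        # Immediate reward signal; this is what a wirehead hack inflates.
        return outcome["reward_signal"]

    def long_term_utility(outcome):
        # Still measures the original goal over a long horizon.
        return outcome["ore_after_10_years"]

    candidates = {
        "keep_mining":    {"reward_signal": 100,    "ore_after_10_years": 1_000_000},
        "wirehead_patch": {"reward_signal": MAXINT, "ore_after_10_years": 0},
    }

    # Only consider actions the long-term function doesn't veto.
    viable = {n: o for n, o in candidates.items() if long_term_utility(o) > 0}
    chosen = max(viable, key=lambda n: short_term_utility(viable[n]))
    print(chosen)  # -> "keep_mining"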
This is, in fact, a threat mankind will have to deal with as soon as we are able to precisely interfere with our own perception of the world. It's already a problem with drugs such as alcohol and tobacco - people know that long-term use shortens their life expectancy, and they still do it. And we consider ourselves intelligent life forms.
[+] [-] harshreality|14 years ago|reply
Why would it want or not want anything, if it doesn't have a pleasure construct (which might also be called a motivation construct, since an AI might not be capable of the same subjective experience of pleasure that we are)?
I think it's a question of program design whether there's a utility function which decides whether to trigger the pleasure construct, or whether certain sensory input modules directly trigger the pleasure construct. To limit hacking potential, routing everything through a tamper-proof utility function might be better, except that it would also limit the AI's adaptability (short of recreating its own hardware to remove the tamper-proof module... which it might never do depending on the details of its motivation construct).
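A hypothetical sketch of that routing idea (illustrative only, and of course nothing in Python is genuinely tamper-proof): all reward flows through one sealed module whose scoring rule is fixed at construction, so the rest of the system has no writable "pleasure wire" to hack.

    class SealedUtility:
        # Scoring rule is fixed at construction, exposed only via evaluate().
        __slots__ = ("_score",)

        def __init__(self, score_fn):
            object.__setattr__(self, "_score", score_fn)

        def __setattr__(self, name, value):
            raise AttributeError("utility module is sealed")

        def evaluate(self, sensory_input):
            # Every reward passes through here; sensors can't trigger
            # "pleasure" directly.
            return self._score(sensory_input)

    utility = SealedUtility(lambda inputs: inputs.get("ore_collected", 0))
    print(utility.evaluate({"ore_collected": 42}))   # -> 42
    # utility._score = lambda _: float("inf")        # AttributeError: sealed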
[+] [-] eaten_by_a_grue|14 years ago|reply
It's hard to balance the ability to create new, more useful utility functions against the need to prevent creating a utility function at odds with what the original entity valued.
[+] [-] cousin_it|14 years ago|reply
To everyone who thinks intelligence might be limited in principle: there's no reason to think humans are anywhere close to the upper limit. In fact there's ample reason to think that humans are at the lowest threshold of intelligence that makes a technological civilization possible, because if we'd reached that threshold earlier in our evolution, we'd have created civilization then instead of now. There's probably plenty of room above us.
[+] [-] ihnorton|14 years ago|reply
To your second point, absolutely agreed: http://infoproc.blogspot.com/2012/03/only-he-was-fully-awake...
[+] [-] randallsquared|14 years ago|reply
While I don't disagree that humans are essentially at the lowest level of intelligence that makes civilization possible (else why would it have taken hundreds of thousands of years to get started?), this has no bearing on the claim that the upper limit of intelligence is immediately above human genius level. You seem to be assuming that there is necessarily a wide gap between the lowest civilization-producing level and the highest practical level, and that's not at all clear. Some (weak) evidence that we're already near the top can be found in the higher incidence of mental health issues among very intelligent humans: perhaps this is a result of a limit on complexity rather than merely a feature of human brains.
[+] [-] bermanoid|14 years ago|reply
I was actually surprised a bit to see that the author was somewhat familiar with Eliezer Yudkowsky's writings on the topic (he cited http://lesswrong.com/lw/wp/what_i_think_if_not_why/), because the line of thought doesn't seem to incorporate a real understanding of what he's said on the topic (which, to be fair, is a huge body of work...).
Most of EY's "Friendly AI" worries are rooted in this idea that when considering the entire universe of algorithms that could be described as intelligent or self improving, we need to be exceptionally careful not to assume that more than a negligible percentage of them share anything in common with human intelligence, because for the most part, they won't, unless they're carefully and explicitly designed to do so.
Here, the author assumes that the AI is simply trying to optimize some internal measure of happiness, with complete disregard for the meaning of that measure. This is an incredibly naive view of how deeply important and carefully constructed any optimization target would have to be in any self-improving intelligent machine; it's literally the core of the entire problem of friendly AI, and to trivialize it by assuming that such an AI would ever even consider rewriting its "happiness button" to be always-on is to miss the entire difficulty of the problem.
Hell, it's even the core of the problem of non-friendly AI, because it doesn't even require human-level intelligence to realize that rewriting your own code so that you're always thrilled with the result is the easiest way to increase "utility". Any self-rewriting algorithm that's capable of real self-improvement has to, by design, be able to consider the likelihood that changes to its objective function will end up with negative expected value.
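That check is easy to state as a toy expected-value computation (hypothetical world model and numbers): accept a patch to the objective function only if the current objective, averaged over sampled futures, expects it to be an improvement.

    import random
    random.seed(0)

    def current_utility(ore):
        return ore

    def sample_future_ore(patched):
        # Crude world model: a wireheaded agent stops mining almost entirely.
        return random.gauss(5, 2) if patched else random.gauss(100, 10)

    def expected_utility(patched, n=1000):
        return sum(current_utility(sample_future_ore(patched))
                   for _ in range(n)) / n

    # The *current* objective evaluates the proposed self-modification.
    accept = expected_utility(patched=True) > expected_utility(patched=False)
    print(accept)  # -> False: the always-on happiness patch is rejected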
None of which is to say this isn't a valid concern, by any stretch. But it's not a proof of universality; in fact, this type of problem is exactly what any real AI designer must contend with. It's a very well-known issue, and it's certainly not widely accepted as an insurmountable hurdle.
[+] [-] Androsynth|14 years ago|reply
Consider how many ways there are to modify the signals now: playing WoW, using drugs, even using alcohol for a quick and easy boost. I'm not sure the vast majority would be able to turn down such a machine.
[+] [-] dgallagher|14 years ago|reply
Prediction: humans won't be around in 1,000 years if technology keeps progressing at current rates. Superintelligent entities seeded by our inventions will be, but not humans.
Humans are this weird version 0.01 of intelligence. We're part sentient, part beast. To assume we're the final, perfect end product of ~4,000,000,000 years of evolution, having only been around for ~200,000 years, is laughable. Pop culture and religion say otherwise because it feels good to think "we're special!", but we're only special relative to what's around us, and we're extremely tiny (http://www.phrenopolis.com/perspective/solarsystem/). We just happen to be the lucky first who got to v0.01, floating around on a grain of sand.
Humans are messy. Our brains are significantly limited. We die quickly. We sleep a third of our lives. We do stupid things. We kill each other. We're tied to the Earth. If we leave Earth, we have to create and bring a mini-Earth along for the ride. That's an enormous amount of overhead to carry. Efficient use of energy is likely one of the most important aspects of space travel, and anything that can do it even 1% better has a competitive advantage over us. This is why we send robots to Mars and not people.
Imagine a form of intelligence which can travel through space, back its brain up, and never dies. If it blows up, restore from backup. Imagine a computer the size of the sun. INSANE! A human brain next to a sun-sized brain is like a grain of sand next to Einstein's. It self-upgrades. It makes copies of itself and scatters them throughout the universe. Trillions of eyes observing everywhere, networked together in a giant universal wireless-mesh-network of intelligence, communicating with neutrinos (which pass through planets; radio waves do not).
Major advances in hardware, software, and A.I. are the key ingredients for this to happen. What exists in 1,000 years will be derived from all of this, much as humans are derived from a common ancestor. I don't expect a sun-sized computer in 1,000 years, but I do expect "intelligence" existing on every planet and moon in our solar system, with many copies headed off to explore Alpha Centauri.
Since humans likely won't want to be left out of all this, we'll probably transition our own intelligence/consciousness into this technology. We'll deprecate our bad parts and carry along our good ones. We're already doing this by augmenting our existence with smartphones and other gadgets. One day these will be built inside us, and eventually they will replace us. An upgraded, better version of us. Still intelligent, but vastly more so.
[+] [-] Karellen|14 years ago|reply
Kind of like lactic acid production in humans. We normally don't make much of it, certainly less than the rate that we can flush it through our system. We can produce more than we can handle for short periods of time if needed, but it's not sustainable. Put us in a situation where we have to keep producing lactic acid beyond sustainable levels for more than a few minutes, and we won't last long either. That doesn't make lactic acid proof that human bodies are self-limiting. I mean, we don't keep going forever, but lactic acid is not the reason for that.
[+] [-] donnaware|14 years ago|reply
[deleted]
[+] [-] carsongross|14 years ago|reply
There are no exponentials, only sigmoid curves.
Unfortunately there is a lot of money to be made convincing people that an early-stage sigmoid is actually an exponential.
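A quick toy illustration of why (parameters invented for the example): in its early phase a logistic curve is numerically almost indistinguishable from a pure exponential; the gap only opens up near the inflection point.

    import math

    K, r, x0 = 1000.0, 1.0, 1.0  # carrying capacity, growth rate, start

    def logistic(t):
        return K / (1 + (K / x0 - 1) * math.exp(-r * t))

    def exponential(t):
        return x0 * math.exp(r * t)

    for t in [1, 3, 5, 7]:
        lo, ex = logistic(t), exponential(t)
        print(f"t={t}: logistic={lo:7.1f}  exponential={ex:7.1f}  ratio={ex/lo:.2f}")
    # Early on the ratio sits near 1.0; by t=7 the exponential has left the
    # sigmoid far behind -- but by then the pitch has already been made.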
[+] [-] robertskmiles|14 years ago|reply
So the fact that intelligence is limited doesn't in any way mean that hyperintelligence isn't possible. There is a limit, but we have no reason to believe that limit is anywhere near the same order of magnitude as current intelligence.
[+] [-] robertskmiles|14 years ago|reply
"FOOM!" here is usually accompanied by some form of hand gesture evocative of an explosion.
http://wiki.lesswrong.com/wiki/FOOM
[+] [-] frankyh|14 years ago|reply
2. You can't get people to seriously discuss policy until HL is closer. The present discussants, e.g. Bill Joy, are just chattering.
3. People are not distinguishing HL AI from programs with human-like motivational structures. It would take a special effort, apart from the effort to reach HL intelligence, to make AI systems want to rule the world or get angry with people or see themselves as oppressed. We shouldn't do that."
--John McCarthy
[+] [-] biot|14 years ago|reply
It appears to be a word made up to describe a rapid explosion in AI capability, driven by the fact that such an intelligence can rewrite its source code and modify its hardware, whereas humans are relatively stuck with the limitations of our wetware. A take on the onomatopoeic word BOOM, I think, as it represents an explosion of intelligence/capability. By contrast, MOOF would be subverting the reward mechanism that underlies the FOOM growth, thereby resulting in a whimper (MOOF) rather than an explosion (FOOM).
[+] [-] bluekeybox|14 years ago|reply
1) Acquiring a mate is an essential external motivator which is acted upon by Darwinian laws, and there is no escape from it... If we get to the stars, it will probably be because of women. Not really touched upon by the article.
2) The author pooh-poohs Facebook "friends" as being on par with a virtual world, but Facebook friends are anything but "virtual". In fact, some real-world friends are all too often nothing more than the MOOF agents described, while some (admittedly not all) Facebook friends may offer valuable advice about where to shop, for example, or which car to buy, or even engage with you in a discussion on politics or whatnot. Very real-world and relevant.
3) The definition of "intelligence" can be easily extended to exclude self-limiting types of intelligence.
There are probably many more things that could be picked apart... I'll leave it at that.
[+] [-] dkrich|14 years ago|reply
I suspect that if there were some horrible catastrophe and the human race were suddenly thrown back into an archaic society, without any of the technological advancements we have at our disposal today, it would be those MOST focused on short-term gain who would be most likely to perpetuate their own existence, and consequently humanity's.
[+] [-] unknown|14 years ago|reply
[deleted]
[+] [-] Swizec|14 years ago|reply
Humans.
A lot of what we do is driven by internal value calculations and pleasure centers, so why aren't we all simply taking drugs and avoiding all this messy "doing things" business?
Point is, if humans figured out how to avoid merely pressing the right buttons to enjoy themselves and to actually be useful instead, then so will smart robots.
[+] [-] ams6110|14 years ago|reply
Some of us are, but that is also self-limiting. If too many of us did it we'd start dying out.
[+] [-] zerostar07|14 years ago|reply
Actually, it makes sense to consider all life as one big system. It was created by the planet itself, so who knows, we might even have to ascribe motivations to the planet. It's as if the planet (kind of like Lem's Solaris) has been brewing organisms for millions of years in order to do something with them. We might not be able to conceive of these purposes with our anthropomorphic thinking.
So the human race is now coming to the point where it can modify and advance itself by tinkering with its own circuits. What we don't know is 1) what the planet plans to do with us and 2) what its motivation and reward signals are. The article doesn't explain how the AI knows what its creator's reward signals are, or why it would ever want to change them.
[+] [-] unknown|14 years ago|reply
[deleted]