AI is like a mountain. No matter what humanity does, someone will summit just because it's there. We can control businesses to an extent, but no one has control over a curious person in a basement.
And good luck holding the military back.
One way or another humanity is on its way to the inevitable, and it's our evolutionary destiny because we are basically hard-wired to pursue the curiosity that is AI.
You are wrong. At least when it comes to AGI, which is the only kind of AI that really needs to be prevented from existing. Your conclusions are based on fuzzy thinking and extrapolating old and irrelevant data.
We can control businesses. And as for someone in their basement, it isn’t clear at all that one could develop AGI in their basement. AGI's discovery will probably require massive, massive amounts of compute: more than could fit in a basement or be powered by a domestic grid connection without drawing attention.
Good luck holding the military back? You are talking about the US, right? The military and every other government entity answers to the people, either directly or indirectly. If enough ordinary people wanted it, the constitution of the United States itself could be amended into the iTunes terms of service. And since AGI is to the detriment of literally every human being on the planet, I think some basic legislation could be passed.
Don’t succumb to these notions of hopelessness. They are wrong. And you even spread them around which just makes solving the problem even harder.
You talk as if AI is something a person can "make" in their garage. I don't buy this for a minute. You can argue the exact same thing about, say, nuclear weapons. But people making nukes in their garage is not a real problem because such a complex project with a lot of niche resources is not attainable in a garage. Curiosity doesn't mean anything if you need decades of mathematics, CS, statistics and neuroscience research as well as thousands of human hours of programming to achieve something.
The amount of resources and thus damage that an individual in a basement can do is rather limited.
We have treaties on nuclear and chemical weapons. The same could happen for autonomous weapons and weaponized intelligent agents. But it might require a serious case of abuse before we get there, as it did for chemical and nuclear weapons...
I think there was once a yogi or guru-type that criticized the denuclearization effort saying why get rid of nukes if we still have the mindset that can justify using them?
I can't help but think the same is true of AI or any technology that could be used for good or bad.
We spend a lot of time trying to control the tools, but less time trying to improve culture and ethics of the operator.
The article fails to support this level of alarmism. Yes, it can be abused, but it's just a statistical modeling technique. Even linear correlation can be abused; witness the replication crisis in the social sciences.
The danger of AI abuse is that existing large, powerful entities will put too much faith in statistical models without understanding their limitations, and will make bad decisions that harm people. Fortunately we have a filtering function -- entities that use these techniques with more nuance will slowly replace the failing ones because they are making better decisions. There will be a long transition as governments desperately try to save the dinosaurs from the consequences of their poor decision making, but if they can't make the transition, they're going to cease being relevant.
AI abuse is just statistical abuse. If a bank has a model based on ten years of real estate data that says, "hey, real estate increases in value, we should leverage into that", then come 2008 they'll find that what their neat statistical model predicted no longer holds. That's all the "dangers of abuse" are.
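The 2008 scenario above can be sketched in a few lines. This is a toy with invented numbers, not real housing data: fit an ordinary least-squares trend on one regime, then extrapolate into a regime the training window never saw.

```python
# Toy sketch of regime change: a model fit on ten years of steady growth
# extrapolates confidently into a crash. All numbers are invented.

years = list(range(1997, 2007))                  # ten "training" years
prices = [100 + 8 * (y - 1997) for y in years]   # steady growth regime

# Closed-form ordinary least-squares fit of price on year.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(prices) / n
slope = (sum((x - mean_x) * (p - mean_y) for x, p in zip(years, prices))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

predicted_2009 = intercept + slope * 2009
actual_2009 = 120   # a crash outside anything in the training data

print(f"model says {predicted_2009:.0f}, market says {actual_2009}")
# → model says 196, market says 120
```

The model isn't "wrong" about its training data; the world simply left the regime the data described, which is the commenter's point about leveraged bets on extrapolation.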
> You have expressed concern that corporations have ‘stolen’ talent from academia.
Nobody is stealing anything. What they don’t realize is that the free market will incentivize AI researchers to go to the private sector. It’s just simple economics.
If private industry are using inflated (in the short term) valuations and low tax rates to fund a talent war, but publicly funded educational institutions can't compete by increasing funding (through governments raising taxes or printing money), then the playing field isn't exactly level, is it?
Nobody is stealing anything. What they don't realize is that the free market will incentivize people to liberate your valuables into the private sector when gains are better than the potential losses. It's just simple economics.
Obviously facetious, but "stealing" is not some axiomatic boolean that never changes. It's judged differently by different societies depending on time, circumstances, and ideas.
"Raiding" employees was common place. Now there are non-raid agreements. If non-raid agreements became very ubiquitous it would be considered stealing.
Working employees 100 hour weeks with no overtime was commonplace. Now there are labor laws. It would be considered wage theft and exactly "stealing" if you did it today.
"Simple economics" gives you no guidelines to judge any of these things "stealing" or "not stealing".
Maybe that's a failure of the market? If the market is pushing people to do things that aren't what we want, then "it's the free market" isn't a defense.
It's no secret that the largest military contractors are actively investing in AI. Enormous opportunities and dangers. And it's not just computer vision and robotics: almost every aspect of modern warfare will be AI-augmented.
> "has raised concerns about the possible risks from misuse of technology"
Isn't misuse possible in all (or, at least, almost all) technology?
Drugs can change mood, physical condition, and hormones, and I think they could even be misused as a long-term weapon against the population of a nation.
Nuclear power. Explosives. Social media. TV programming. Data processing with punch cards. Drones. Knives. Herbicides. Plastic waste.
I understand the points Bengio is trying to make, but it seems to be the same problem with all technology: someone, somewhere, will find a way to use it against others, with consequences minor or enormous.
Most of your examples are covered by rules and regulation, in order to get the most out of the technology without too many adverse effects. The point here is that we should do the same for "AI", and that we should start thinking about this now.
Bengio is asked: What will be the next big thing in AI?
> "Deep learning, as it is now, has made huge progress in perception, but it hasn’t delivered yet on systems that can discover high-level representations — the kind of concepts we use in language. Humans are able to use those high-level concepts to generalize in powerful ways. That’s something that even babies can do, but machine learning is very bad at."
I read this as: "We have super-advanced skip-logic software that can produce specific results when provided a large enough data set, but "intelligence" as it is defined, does not exist."
AI is really just sophisticated software algorithms.
In my opinion, there is no true artificial intelligence, and it will be unlikely that we will ever create such a thing for quite some time, if at all. AI is being used as a buzzword to garner attention.
It seems much more likely that we will build a brain-computer interface before true AI, and it will prove more efficient than what we have today, which is effectively many computers churning through a super-long list of "if-then" statements.
It always seems like FUD to me when I read these articles. What is the specific concern? Can we get some examples? Sometimes people are like oh no AI is gonna take over! And I just think about how hard it is for humans to integrate two software systems and laugh. What are we afraid of it doing? If it gets out of control can’t we just you know... cut the power?
"Killer drones are a big concern....dangers of abuse, especially by authoritarian governments...AI can amplify discrimination and biases".
Bengio is specifically talking about malicious actors using AI. So saying "just unplug it" isn't really applicable, especially when it comes to governments.
Meanwhile, Google has cancelled its AI ethics board in response to an internal employee petition complaining about a mainstream conservative being included in the 8-member committee.
https://www.forbes.com/sites/jilliandonfro/2019/04/04/google...
I for one am fking sick and tired of my existence being subject to debate, due in part to these groups.