NotSammyHagar|6 years ago
The ideas in that article suffer from the same lack of creativity that a lot of these philosopher-written pieces suffer from (including the main article here). Someone will program the AI, the robot, to want to survive, to want to destroy competitors. It's frankly incredibly irritating to read things like that Sci Am article you cite, because there's some bizarre belief that only how "things evolve" will govern them. Well, I've got news for everyone who holds this naive view: people will program, adapt, and alter AIs to act as they desire, at least as a starting point. Whether it's a drone that kills people autonomously or some future AI, someone will have influence over its beliefs and point it in a direction they desire.

darsnack|6 years ago
That's the entire point of the article. The AI itself will not develop a need to harm; users and creators of the AI will program that into it. But AI by itself, even with creators of the best intentions, has real problems that affect society today. And talking about a hypothetical future has distracted the public narrative from the real problems with AI that we need to solve now.

darsnack|6 years ago
They even address what you are saying in the article. They mention the looming danger of weaponized AI, which is exactly the case where someone has altered the program to do harm.