(no title)
pasta | 7 years ago
Say we program a self-driving car to estimate the impact of a collision with an object, and then 'teach' it that crashing into a soft object is better for the human in the car.
So we think the car is smart because it can distinguish soft objects from hard ones. But in an unavoidable crash it steers into a group of people instead of a parked car...
The Pentagon (DARPA) is now investing in AI that learns from previous experiences (instead of being fed the same dataset over and over). I'd guess other companies are working on this as well. That will create scary AI, because the program will then be altering itself constantly as it gets 'smart'.
b_tterc_p | 7 years ago
We definitely don’t want the car deciding prematurely that it can’t possibly avoid a crash and starting to choose what to hit (or even aiming for it). That would lead to truly dumb AI. What we want is:
1) minimize the likelihood of a crash
2) minimize the speed of any likely crash
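The two rules above amount to a lexicographic objective: crash probability dominates, and impact speed only breaks near-ties. A minimal sketch with made-up numbers (the `Trajectory` fields, `choose` helper, and tolerance are all hypothetical, not from any real planner):

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    name: str
    crash_prob: float    # estimated probability of any collision (assumed input)
    impact_speed: float  # estimated speed in m/s if a collision does happen

def choose(trajectories, prob_tolerance=0.01):
    """Lexicographic choice: crash probability first, impact speed second.

    prob_tolerance treats near-equal probabilities as ties so the
    secondary objective (impact speed) can break them.
    """
    best_prob = min(t.crash_prob for t in trajectories)
    # Keep only options whose crash probability is effectively minimal...
    finalists = [t for t in trajectories
                 if t.crash_prob <= best_prob + prob_tolerance]
    # ...then pick the one with the lowest expected impact speed.
    return min(finalists, key=lambda t: t.impact_speed)

options = [
    Trajectory("brake hard in lane", crash_prob=0.30, impact_speed=5.0),
    Trajectory("swerve left",        crash_prob=0.29, impact_speed=15.0),
    Trajectory("maintain course",    crash_prob=0.90, impact_speed=20.0),
]
print(choose(options).name)  # "brake hard in lane": near-tied on probability, far slower impact
```

Note that nothing here picks *targets*; the car only ranks its own maneuvers, which is exactly the point of the comment above.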