> "The vast majority of accidents are due to human error."
Obviously, since right now there is very little alternative to human error. But there's no reason to believe that will remain the case once you introduce autonomous vehicles. That's a whole new variable.
> But there's no reason to believe that to be the case when you introduce autonomous vehicles.
Care to expand on why you think that?
I think there are many reasons to believe human error will still be the leading cause of accidents once some cars are autonomous. The main, and most touted, reason is that driving is one of the kinds of tasks computers are much better suited to than human beings. Driving well is mostly just following a few clear rules over and over. We still fail at that very often, but computers excel at following clear rules repetitively.
So there are many reasons to believe computers will outperform humans at driving. And, in my opinion, by a large margin. I'm intrigued to know why you wouldn't think so.
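To caricature the point (a toy sketch in Python, not anything like a real driving stack): the rules are simple, and a computer applies them identically on every tick, forever.

```python
# Toy sketch: a rule-based "driver" that never gets tired or distracted.
# Purely an illustration of repetitive rule-following, not a real control system.

def decide(speed_limit, current_speed, obstacle_distance_m):
    """Apply the same simple rules, in the same order, every tick."""
    if obstacle_distance_m < 30:          # rule 1: obstacle close ahead -> brake
        return "brake"
    if current_speed > speed_limit:       # rule 2: over the limit -> ease off
        return "ease_off"
    if current_speed < speed_limit - 5:   # rule 3: well under -> speed up
        return "accelerate"
    return "hold"                         # otherwise: maintain speed
```

A human's attention drifts after a few thousand repetitions of rules like these; the program's doesn't.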
Edit: As I read it, this is pretty straightforward. It does the following things:
- Makes it explicitly legal to operate an autonomous car on public roads, provided the car meets a safety standard yet to be devised.
- Authorizes the establishment of safety standards for autonomous vehicles by the California Highway Patrol.
- Until these standards are devised, it does not prohibit autonomous cars from operating on CA public roads.
"Autonomous Cars" in this case are defined fairly narrowly: a car capable of driving "without active control and continuous monitoring of a human operator".
Prediction: at some point, there will be an accident involving an autonomous car. The event data recorder (aka the "black box") will indicate that the human operator took control of the vehicle before the accident occurred. The driver/passenger, however, will claim this was not true, and a lawsuit will follow, with claims that the EDR was hacked or that the car manufacturer/software provider rigged the EDR to falsely blame the human in the event of an accident.
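If EDR integrity is going to end up in court, manufacturers will need tamper-evident logging. A minimal sketch of one standard approach, a hash chain where each record commits to the previous one (hypothetical record format; real EDRs don't necessarily work this way):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first record

def append_event(log, event):
    """Append a record whose hash covers the previous record's hash.
    Altering any earlier record invalidates every later hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(event, sort_keys=True)  # canonical serialization
    log.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
    })

def verify(log):
    """Recompute the whole chain; any retroactive edit is detected."""
    prev = GENESIS
    for rec in log:
        payload = json.dumps(rec["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
append_event(log, {"t": 100.0, "mode": "autonomous"})
append_event(log, {"t": 101.5, "mode": "human_override"})
assert verify(log)
log[1]["event"]["mode"] = "autonomous"   # retroactive tampering...
assert not verify(log)                   # ...is detectable
```

Note this only makes after-the-fact edits detectable; it doesn't stop whoever writes the log from lying in the first place, which is why the lawsuit scenario above is still plausible.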
More generally, how can we evaluate an autonomous car's effectiveness in avoiding an accident, if there is always a human sitting in the driver's seat?
I think most drivers would instinctively take control of the car if they felt in danger, whether or not it's statistically in their interest.
I would really love this if it creates a minimally invasive legal framework that enables innovation and curbs nonsense like pedestrians jumping in front of cars and then suing the manufacturer when they get hit (hopefully with a mechanism that lets manufacturers indemnify their vehicles in such cases without too much liability exposure).
On the other hand, I have to say that I don't exactly have the greatest level of faith in the California Assembly based on past performance. Here's hoping they buck the trend and establish a framework to encourage rather than inhibit innovation.
Car companies. Google certainly wasn't the first to work on self-driving cars; they just got a lot of attention in the tech press for it. The big car companies have been working on it for years. I can't find any details, but I remember hearing about some companies working on it about 8-10 years ago. More recently, BMW has been working on it: http://www.motorauthority.com/news/1072117_on-the-road-with-...
Just look at the participant list for the DARPA Grand Challenge and Urban Challenge competitions: http://archive.darpa.mil/grandchallenge/teamlist.asp (though you also need to look past some of the university affiliations to see the partner companies; VW for the Stanford team, for example).
Is there any article/video (I didn't investigate) showing multiple autonomous cars interacting? I'd be curious to see whether there's any behavioral resonance leading to epic havoc.
Several teams competed, entering driverless cars which interacted with each other and some vehicles operated by real people on a closed course. Carnegie Mellon's Boss car completed all the DARPA conditions and won the grand prize.
Excellent. The sooner we get the laws into place, the sooner autonomous cars become a reality. Does anyone know if California's laws are modeled after Nevada's?
Looking at http://www.leg.state.nv.us/register/2011Register/R084-11I.pd..., it looks like Nevada decided to define a class of license (G) for operation of autonomous vehicles. The CA legislation just makes operating an autonomous car legal if it meets certain safety and performance criteria, which it doesn't define (instead, asks the CHP to come up with these rules). The way this bill is drafted, it looks to me like CA would be much less restrictive of autonomous car operation than Nevada.
Has anyone written or thought deeply about all the ways that self-driving cars could be tricked or hacked into causing accidents, kidnapping passengers, driving off cliffs, running people over, or otherwise creating havoc? Seems like it was just a couple years ago that Toyota was recalling cars over a "sudden acceleration" problem.
A system like this is only going to be as good as the data coming to the car, and given the knowledge that all cars will react a certain way to a certain stimulus, it's a lot easier to design a low-tech hack that would kill a lot of people. Here's one that comes to mind:
Given a two-lane road with a narrow shoulder and an embankment, place a small boulder on the right side of each lane. A human being will either swerve off the embankment or rip their transmission out on the rock. What's a bot going to do?
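That's the crux: if every car runs the same deterministic policy, anyone who knows the policy can stage the stimulus and predict the fleet's response exactly. A toy illustration (hypothetical policy, not any vendor's actual logic):

```python
def avoidance_policy(obstacle_in_lane, shoulder_clear):
    """Deterministic toy policy, identical in every car that runs it."""
    if not obstacle_in_lane:
        return "continue"
    # Never swerve onto an unsafe shoulder; brake to a stop in-lane instead.
    return "swerve_right" if shoulder_clear else "emergency_stop"

# The staged boulder scenario: obstacle in lane, no safe shoulder.
# Every identically-programmed car gives the identical, predictable answer.
print(avoidance_policy(True, shoulder_clear=False))  # emergency_stop
```

Of course, the predictable answer here (braking in-lane) is arguably safer than a startled human's swerve off the embankment; whether determinism is a vulnerability or a safety feature is exactly what's being debated.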
Or you could fill a truck up with diesel and fertilizer and blow it up. This technology is far more likely to reduce deaths associated with cars than to increase them.
Go back to the beginning of the 20th century and think how much damage could be caused by creating a nationwide electric grid. People could electrocute people at will. Personally, I'm kind of glad we went ahead with it.
> Has anyone written or thought deeply about all the ways that self-driving cars could be tricked or hacked into causing accidents, kidnapping passengers, driving off cliffs, running people over, or otherwise creating havoc?
Please see Halting State, Charles Stross, aka cstross on this forum.
> Obviously, as there is very little alternative to human error right now.

Still, a required step for progress.
Manufacturing defects, e.g. faulty brakes.