
On Google's self-driving car accident rates

165 points | gwern | 10 years ago | ideas.4brad.com

181 comments

[+] brianmcconnell|10 years ago|reply
Licensed pilot here (learned to fly in 1987). People need to look at self-driving cars in the way pilots use an auto-pilot.

Auto-pilot is most useful in two situations. 1) long cross country legs where there is not much flying to do (just maintain heading and altitude), so A/P frees the pilot up to manage other systems, enjoy the view, etc, and alleviates fatigue, 2) flying a precision instrument approach, reducing the risk that the pilot will succumb to spatial disorientation in the setup for (usually manual) landing.

With cars, auto-drive capability will be useful in reducing accidents in two modes: 1) long duration highway driving where fatigue is a big issue, 2) intervening to prevent a distracted driver from causing an accident (rear end collision etc).

I'd be perfectly happy with a car that can drive itself in cruise mode on the interstate, but requires an alert driver on local roads (with the added bonus that if I am about to slam into something, it will brake to avoid or lessen the impact).

Something for the liability crowd to consider: self-driving cars won't be able to avoid every potential mishap, but they will be able to reduce their severity. A car that can automatically brake to cut its speed by 25% just before impact will reduce its kinetic energy by roughly 44% (kinetic energy scales with the square of speed), and the potential for injury by more still.
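
As a rough check of that arithmetic, here's a sketch in Python (the mass and speed are just illustrative):

    # Kinetic energy scales with the square of speed: KE = 0.5 * m * v^2.
    # Cutting speed by 25% (v -> 0.75 v) leaves 0.75^2 = 56.25% of the
    # energy, i.e. roughly a 44% reduction, regardless of the car's mass.
    def kinetic_energy(mass_kg, speed_ms):
        return 0.5 * mass_kg * speed_ms ** 2

    before = kinetic_energy(1500, 20.0)        # 1500 kg car at 20 m/s (~45 mph)
    after = kinetic_energy(1500, 20.0 * 0.75)  # same car after braking 25%
    print(after / before)                      # 0.5625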

[+] underbluewaters|10 years ago|reply
I think it's pretty clear that the bar for safety must be much higher (10x) than human-level for these to be accepted by consumers. Auto accidents are very common, and the first time someone has an accident with a car like this everyone they know will hear about it. It will be terrifying. If these are only as safe on average as a human driver then nearly everyone is going to have at the very least a 2nd-hand negative experience.

The emotional response to these accidents is not going to be entirely irrational either. If I have a minor accident with my traditional truck, I'm going to probably have a good understanding of what went wrong and how I can prevent future collisions. With an autonomous vehicle... software upgrade? I'd rather take responsibility for my own safety if that's the case.

[+] Xixi|10 years ago|reply
Which one will learn faster: each individual driver, each time they have a minor accident (or a close call)? Or the autonomous vehicle software, updated with data collected from each and every accident involving an autonomous vehicle? The result will probably be software updates like the ones Tesla cars are getting today.

I suspect that if the attitude towards autonomous vehicle accidents is similar to that of aircraft accidents, they will become extremely safe very quickly.

[+] thrownaway2424|10 years ago|reply
The idea that humans learn well from accidents stands contrary to all available evidence.
[+] magicalist|10 years ago|reply
> I think it's pretty clear that the bar for safety must be much higher (10x) than human-level for these to be accepted by consumers.

Will it? Think about the recent Toyota unintended-acceleration problems or GM's faulty ignition switches. Both of these caused a number of deaths, and the Toyota one was especially scary because it didn't even need accident-like events to cause a problem: the car could just accelerate on its own.

It'll hit the news (like Tesla battery fires, which were reported on vastly out of proportion to other, much more common car fires), but it's not clear at all that it will cause some massive panic or backlash.

[+] caskance|10 years ago|reply
I find it unlikely that consumers are as safety-conscious as you imagine them to be. Think about how many people currently text or do other things to distract themselves while driving. They willingly choose less safety in order to be able to do something else while in transit. Pushing this behavior to its natural conclusion is what will allow self-driving cars to be accepted even if they are 10% less safe than normal drivers.
[+] abalone|10 years ago|reply
What about manufacturer liability? That's the critical question here.

It can't just be 2X or even 10X safer than humans if the manufacturer is liable for the accident. That would EXPLODE the cost of these cars and bankrupt companies -- even if just a small fraction of today's fatal accidents due to driver error became Google's liability.

Consider that even single-vehicle accidents, which today might be the driver's fault, could result in a manufacturer lawsuit.

This is why I think fully autonomous operation, without any human to intervene, is a pipe dream: the safety level required for that is pretty close to perfectly bug-free in an almost infinitely complex world of roads and conditions. Just for liability reasons.

These test cars of course have human copilots. But it remains to be seen if that really will grant manufacturers immunity for accidents. If the car is perfect 99.999% of the time, wouldn't that train the human to trust it and not pay as close attention? And then miss that 0.001% of the time when it hurts someone? Would a judge find it reasonable for a human to stay vigilant and responsible for that fatal corner case bug that cropped up after two years of perfect autonomous driving?

Liability is The. Critical. Question.

[+] DenisM|10 years ago|reply
There is no explosion. The cost of collisions today is reflected in the car insurance, so about $100/mo. If the manufacturer were to absorb this liability at current incident rates, it would make cars about 30% more expensive (assuming $300/mo loan payments). If Google reduces the (incidence * impact) of collision by 10x, the cost will come down to $10/mo, or about 3% of the total car cost.
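
A quick sanity check on those numbers, as a sketch (the $100/mo premium and $300/mo payment are just the figures assumed above):

    # Fold today's liability cost into the car payment and see what a 10x
    # reduction in (incidence * impact) of collisions does to that share.
    insurance_per_month = 100.0      # assumed average premium today
    loan_payment_per_month = 300.0   # assumed monthly car payment

    share_today = insurance_per_month / loan_payment_per_month              # ~0.33
    share_10x_safer = (insurance_per_month / 10) / loan_payment_per_month   # ~0.03

    print(f"{share_today:.0%} of the car cost today, {share_10x_safer:.0%} with a 10x safer car")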

And in return for the modest price increase you get a self-driving car!

[+] dragonwriter|10 years ago|reply
> It can't just be 2X or even 10X safer than humans if the manufacturer is liable for the accident. That would EXPLODE the cost of these cars and bankrupt companies -- even if just a small fraction of today's fatal accidents due to driver error became Google's liability.

If the owner/operator bears no liability and all liability is on the manufacturer, that transfers the liability cost onto the manufacturer. But even if the car is merely as safe (not 2x or 10x as safe), that just means the manufacturer rolls the cost of (likely self-) insuring against that liability into the purchase price (and since the purchaser no longer needs to buy their own insurance, the total cost of operation to the purchaser is unaffected).

And, of course, if a company like Google is both the manufacturer and the owner/operator (e.g., using the vehicles in an Uber-like service tied in with Google Maps and Google Now, or using them for Google Express delivery vehicles with smaller robots onboard to deliver packages to the door, or using them for Google Street View camera vehicles, etc.), operator vs. manufacturer liability makes no difference.

[+] SomeCallMeTim|10 years ago|reply
1. Air bags sometimes kill people who would otherwise have survived in an accident, but on balance they save lives.

2. Making air bags mandatory hasn't caused manufacturers of cars or of air bags to go bankrupt.

3. If self-driving cars cause deaths, but on balance save lives, the situation is directly analogous.

4. There will be a way to make self-driving cars work. Q.E.D.

I seem to remember that laws exist to grant immunity to manufacturers of airbags, at least if they were functioning properly. A quick Google finds lots of ambulance-chasers who talk about "defective airbags," so manufacturers aren't immune to liability if they screwed up.

Similarly, you could imagine that No Fault laws could be passed for self-driving cars that were operating within reasonable parameters. A gross error on the part of a self-driving car could still leave manufacturers liable, but as history has shown, car companies are willing to write off even gross negligence claims rather than voluntarily add safety devices to cars, calculating cost-to-fix vs cost-to-pay-off-family tradeoffs. [1]

In fact, I'd go as far as to say that any company that doesn't offer self-driving cars in the next 15 years may as well plan to shut down or be acquired. This is going to be a must-have feature. They will figure out how to make it work for manufacturers.

The liability question will be solved. They wouldn't even let car companies go bankrupt when they (arguably) deserved to; they certainly won't let them get into a situation that kills them, and as others have rightfully pointed out, the numbers aren't even that prohibitive if they needed to self-insure.

[1] http://content.time.com/time/specials/2007/article/0,28804,1...

[+] mikeash|10 years ago|reply
Are you saying this because manufacturers, being gigantic unsympathetic companies, would be sued for much greater damages than at-fault drivers get hit with today? Otherwise it makes no sense, TCO will come out the same whether drivers or manufacturers are liable.
[+] ergothus|10 years ago|reply
Or we could learn to stop suing people over everything. I mean, gross negligence is one thing, but if it's a safer driver than I am and it screws up, we call that an accident.

...

Nah, I'm dreaming; it's far more likely that insurance companies will set up a broad insurance program a la taxi services than it is that we'll learn to stop being litigious.

[+] bertil|10 years ago|reply
You are assuming that, if liability in the US were the critical question, Google would not simply launch its project in a country with more hospitable weather and more compatible liability rules, and from there lobby for more reasonable principles in the US.
[+] danieltillett|10 years ago|reply
In theory this just shifts the insurance coverage from the driver to the manufacturer. The real unknown is whether any insurance company will cover the manufacturer. I suspect that, as with the airline industry, there will need to be a legislated cap on damages to make this work. Provided that insurance is available, there should not be any issue.

More fundamentally we should not assume that the people working on this at Google are idiots. They already know all of this and I am sure they are working with insurance companies and governments to solve this problem. Compared to the technical difficulty of making a reliable and safe autonomous vehicle this is an easy problem.

[+] avar|10 years ago|reply
Liability of all things is "The. Critical. Question." when it comes to autonomous vehicles?

That's such an amazingly American thing to say. Do you really think that the uptake of autonomous vehicles is going to be hindered to any significant degree by what are largely U.S.-specific legal issues?

Can you imagine a world in 2060 where the U.S. decides to leave something like 10-20% of GDP on the table because of the liability aspects of their legal environment? It might hinder uptake a bit but I don't see it stopping it in the long run.

In any case regardless of what the U.S. does at that time the rest of the world is going to be driving autonomous vehicles.

[+] Animats|10 years ago|reply
Volvo's CEO: "We are the suppliers of this technology and we are liable for everything the car is doing in autonomous mode. If you are not ready to make such a statement, you shouldn't try to develop an autonomous system."

So there.

At first, self-driving cars will probably be leased on operating leases, with maintenance and insurance bundled into the payments. Or maybe you own the vehicle, but there's a maintenance and insurance contract required to enable self-driving.

[+] wsetchell|10 years ago|reply
I don't think liability is that key here.

cost of damages = # of accidents * cost per accident

- Self-driving cars should reduce the number of accidents

- Cost per accident seems independent of the kind of car you drive. If a human-driven taxi hits me, I can sue the taxi company. If a self-driven taxi hits me, I can sue Google. I should be able to sue for the same amount in either case. (A rough sketch of this framing is below.)
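
A tiny sketch of that expected-cost framing (every number below is made up purely for illustration):

    # expected damages = number of accidents * cost per accident
    accidents_per_million_miles_human = 4.0   # made-up illustrative rate
    accidents_per_million_miles_av = 1.0      # made up; assumes the AV crashes less
    cost_per_accident = 20_000.0              # same in both cases: the damages
                                              # don't depend on who was driving
    print(accidents_per_million_miles_human * cost_per_accident)  # 80000.0
    print(accidents_per_million_miles_av * cost_per_accident)     # 20000.0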

[+] fweespeech|10 years ago|reply
> It can't just be 2X or even 10X safer than humans if the manufacturer is liable for the accident. That would EXPLODE the cost of these cars and bankrupt companies -- even if just a small fraction of today's fatal accidents due to driver error became Google's liability.

Right now, the insurance company is liable for the accident, followed by the consumer. It'll just shift the insurance market toward manufacturers buying coverage (or, more likely, self-insuring), and the cost of insuring a car will be baked into the price instead of bought aftermarket.

[+] petra|10 years ago|reply
If it's only 10x better than humans, and considering that professional drivers are about 10x better than average, maybe we're better off with something like shared Uber?

Or maybe just add self-driving for the highway, especially if it's possible for the driver to get off the vehicle before it enters the highway and hop on another car that's entering the city?

[+] absherwin|10 years ago|reply
Minor crashes are even more frequent than the article estimates. Thus, the real human accident rate is even higher: probably between 1 in every 24,000 miles and 1 in every 87,000 miles.

The VTTI 100-car driving study[1] equipped 100 cars with sensors and was therefore able to measure all crashes experienced. It directly measured about 1 crash per 24,000 miles. Extrapolating instead from the article's police-reported figure of 1 per 500,000 miles using the 17.4% police-report rate suggests 1 crash per 87,000 miles.

[1] http://www.nhtsa.gov/DOT/NHTSA/NRD/Multimedia/PDFs/Crash%20A...
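
A quick check of that extrapolation (500,000 miles per police-reported crash is the article's figure; 17.4% is the reporting rate cited above):

    # If only 17.4% of crashes ever reach a police report, and reported
    # crashes occur about once per 500,000 miles, then crashes of any kind
    # occur about once per 500,000 * 0.174 = 87,000 miles.
    miles_per_reported_crash = 500_000
    police_report_rate = 0.174
    print(miles_per_reported_crash * police_report_rate)  # 87000.0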

[+] aetherson|10 years ago|reply
That's an interesting study. I'd be cautious about trying to draw a one-line conclusion from it. It is, however, a fascinating read, and it's neither very long nor very technical, so I'd encourage people to read the whole thing.

Note the narrow demographic and geographic scope of the data.

[+] huangc10|10 years ago|reply
I live in Mountain View and I see Google self-driving cars around all the time (day and night). My personal feeling is that they look "different," and my attention deliberately focuses on them (which may or may not affect my driving when I'm around them).

Why not test with cars that look "normal" (think early-2000s Corolla)? Wouldn't this further decrease the chance of possible accidents?

I guess what I'm saying is, whose decision was it to make it look like a toy car and not just an actual regular car that doesn't divert my attention off the road?

*edit for better readability

[+] jimrandomh|10 years ago|reply
They want to get people used to the idea of sharing the road with self-driving cars. People will be a lot more comfortable with them and have a lot fewer misconceptions if they've seen and recognized enough of them for it to stop seeming noteworthy.
[+] hueving|10 years ago|reply
Marketing. You want people to notice and think, "Hey, that's one of those self-driving cars, that's cool!"
[+] grillvogel|10 years ago|reply
Google is only capable of designing things that look like they were made by and for 5-year-olds
[+] DanFeldman|10 years ago|reply
The CA DMV also publicly reports all accidents involving autonomous vehicles [1]. Most accidents are caused by drivers taking over control manually, or by other actors on the road driving erratically (or rear-ending the self-driving vehicles). The entries on the page report Google, Delphi, and Cruise Automation as having had incidents, though the damage is all minor, if there is any at all.

[1] https://www.dmv.ca.gov/portal/dmv/detail/vr/autonomous/auton...

[+] bduerst|10 years ago|reply
>How does that number compare with humans? Well, regular people in the USA have about 6 million accidents per year reported to the police, which means about once every 500,000 miles.

Aren't accidents only reported to police if there is a hit & run, or some other criminal activity?

Seems like the wrong metric for comparison, given the way they define the self-driving "accidents" and that the majority of human fender benders are not reported.

[+] dsp1234|10 years ago|reply
In some (most? all?) states, all accidents over a certain dollar amount, or that caused injury to any party, are required to be reported to the police.

But the lack of police reports is why the article mentions that insurance companies believe the accident rate is higher than the 'official' number.

Anecdotally, the number of times that someone has hit my car, and then asked me not to report it in exchange for cash is non-zero.

[+] cheald|10 years ago|reply
Not necessarily. Major accidents will have the police involved, small fender benders might be handled with just an exchange of insurance information. If there's substantial damage though, the police will probably get involved since a police report makes collecting on insurance easier.
[+] HCIdivision17|10 years ago|reply
It's not uncommon to contact the police so that a quick statement can be made and the paperwork logged. It seems to help with the insurance stuff and getting a third-party's eye on the scene.
[+] aresant|10 years ago|reply
When it comes down to differentiation, what do I care about if a robot is driving me around?

Safety. Period.

The equation to get to safety is sensors + computing + map / map geometry.

Sensor differentiation will matter for a while, and some OEMs will do very well, but ultimately it will just boil down to the software.

So how is the future of this space not just "everybody licenses Google's platform"?

If life/death safety is the differentiator (and measurable) I feel like the writing is on the wall. Google is going to dominate this.

Maybe Musk sneaks in there with his brash approach of using his drivers as test subjects, but his position as a competing auto manufacturer seems less compelling than supporting an "Android-like" effort from Google.

[+] chad_strategic|10 years ago|reply
Next time you pull up to a stop light, look to the left and then right. I'm willing to bet the drivers on the left and/or right are looking at their phones.

It's getting worse... the phone, Bluetooth, the dashboard computer, traffic, impatient drivers, etc.

I for one can't wait for these self-driving cars; they have got to be better than the distracted drivers we have now.

[+] Thirdegree|10 years ago|reply
> Next time you pull up to a stop light, look to the left and then right. I'm willing to bet the drivers on the left and/or right are looking at their phones.

And the dude between them is busy staring at his neighboring drivers rather than the road!

[+] ams6110|10 years ago|reply
I don't mind someone looking at their phone while they're waiting for a red light. I'm more worried about the ones who do it while in motion.
[+] tjl|10 years ago|reply
Is Google only testing in California? It seems that the data for accidents was given for the whole of the US, but it should be compared to California only. I don't know if it's noticeably different or not, but I'd think that snow and ice would increase the accident rate in some states.

Snow and ice, I think, would pose a real challenge for self-driving cars.

[+] bertil|10 years ago|reply
I am surprised that Google has not started recording what its software would do on cars driven by humans, in order to learn from them. It could increasingly give feedback like "Our software would not have driven that fast / that close to that curb," and it would be exposed to far more dangerous situations, and learn from far tougher ones, than it currently can.
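
A minimal sketch of what that kind of passive comparison might look like, assuming logs of the human's actual speed alongside what the software would have chosen (all names and numbers here are hypothetical):

    # Hypothetical "shadow" logging: the self-driving stack runs passively on
    # a human-driven car, and we flag the points where its plan disagrees.
    records = [
        # (mile_marker, human_speed_mph, software_speed_mph)
        (12.1, 48.0, 45.0),
        (12.2, 52.0, 45.0),
        (12.3, 38.0, 40.0),
    ]

    for mile, human_mph, software_mph in records:
        if human_mph > software_mph + 5:  # arbitrary disagreement threshold
            print(f"mile {mile}: software would have driven {software_mph} mph, "
                  f"the human drove {human_mph} mph")
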
[+] occamrazor|10 years ago|reply
They have a vast number of human-operated vehicles with extensive telemetry data: Street View cars.
[+] argonaut|10 years ago|reply
This is a harder problem than you think it is. This is a classic reinforcement learning problem, and it is incredibly difficult in the real world. It's comparatively easy in discrete-state-space, perfect-information, turn-based games like Go (hence the recent advance there), and very difficult in robotics.
[+] joe_the_user|10 years ago|reply
How safe does a self-driving car have to be before it is allowed on the road?

I would say that's going to be a societal decision rather than an engineering decision.

An interesting point is that the introduction of the private automobile was itself an imposition on the social space of its time, and one that was wildly unsafe, with automobile accidents still being one of the leading causes of death in this country.

If it's determined that self-driving cars will be allowed, ordinary drivers will be forced to adjust to their presence and will have to learn their quirks.

Certainly, cell phones are realistically being used by a significant fraction of drivers today, and if the accident rate has gone up at worst only slightly because of that (not enough to outweigh other safety measures), it's because non-cell-users have adapted to the presence of the cell user, however annoying that might be.

[+] iamleppert|10 years ago|reply
All of the brouhaha about self-driving cars is moot until real user testing begins. I'm not talking about driving a Google-mobile around the streets of Mountain View with an engineer in the driver's seat.

These vehicles (I would hope) are designed to drive around people in different circumstances. They're software for use by people, and until we have realistic tests with different kinds of people (different ages, driving experience levels, etc.), it's all academic.

I'm somewhat wondering if all the self driving car stuff by Google and Tesla is primarily a vehicle (no pun intended) for marketing rather than actual tech that will yield a real product.

[+] bertil|10 years ago|reply
It seems, from previous reports by Google, that their ability to make sense of discrepancies between sensors relies on knowing the road very well: not just a map with nuances like "cars can go there, but it's mainly a pedestrian street," but also local, undocumented habits. They mentioned details suggesting that they would have to drive a lot in each new city to learn enough to make their car safe.

The one I remember is that they had started driving in Austin and came across a very local species of vehicle with a unique habit: the hipster fixie rider and his track-stand at red lights. Committed fixed-gear riders have no freewheel and stay upright at a stop not by putting their feet down but by rocking back and forth; that motion was new and not interpreted clearly by the car, which hesitated, unsure whether to read it as a false start. It's since been fixed. The article didn't specify whether the shirt pattern had any meaningful weight in the interpretation.

[+] willvarfar|10 years ago|reply
I can't wait for self-driving cars. It would be the end of that discussion, every time we eat out, about who is going to drive us home...
[+] TazeTSchnitzel|10 years ago|reply
If the humans always take the wheel in dangerous situations, how do you know the cars are safe? The cases where they are most likely to get into trouble are the cases where the AI is not in operation!
[+] tim333|10 years ago|reply
>we have to figure out just how to test these vehicles so we can know when a safety goal has been met. We also have to figure out what the safety goal is.

Or you can just crack on and try them with users, like Tesla.

[+] dsfyu404ed|10 years ago|reply
The average human driver is consciously forgoing the option of driving in the hyper-conservative manner that the Google cars do...