If it's a subjective matter tied to perception of risk rather than actual, statistical risk, such perception can be swayed.
The challenge remains that people will be killed in accidents involving autonomous control. And we anticipate that the number of people killed will be fewer, hence 'saving lives'. However, the lives lost in autonomous accidents will be a different set of people than those that would have died in human driven accidents. There will be cases where a court determines that the autonomous system was the cause. Families of those killed will want justice, while those separately saved by autonomous systems may never be heard from in the same case.
I expect that in the end it will come down to a business decision, and that decision will be informed by an actuarial exercise: will profits and insurance be able to cover the costs of defending and settling such cases? Who knows, maybe the threshold is crossed at 5x safer.
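To make that actuarial framing concrete, here is a back-of-the-envelope sketch. Every number in it is a made-up assumption (fleet size, crash rate, at-fault share, settlement cost, margins); the point is only the shape of the calculation, not the conclusion.

    # Back-of-the-envelope liability check. All inputs are hypothetical.
    fleet_size = 1_000_000        # autonomous vehicles on the road (assumption)
    miles_per_vehicle = 10_000    # miles per vehicle per year (assumption)
    human_fatal_rate = 1.1e-8     # fatal crashes per mile, rough human baseline
    safety_multiplier = 5         # the "5x safer" threshold discussed here
    at_fault_share = 0.5          # fraction of fatal crashes a court pins on the system (assumption)
    cost_per_case = 10_000_000    # average cost to defend and settle, USD (assumption)
    margin_per_vehicle = 2_000    # annual profit + insurance budget per vehicle (assumption)

    total_miles = fleet_size * miles_per_vehicle
    expected_cases = total_miles * (human_fatal_rate / safety_multiplier) * at_fault_share
    expected_liability = expected_cases * cost_per_case
    available_margin = fleet_size * margin_per_vehicle

    print(f"expected at-fault fatal cases per year: {expected_cases:.0f}")
    print(f"expected liability: ${expected_liability:,.0f}")
    print(f"available margin:   ${available_margin:,.0f}")
    print("covers the exposure" if available_margin > expected_liability else "does not cover it")

On these invented inputs the margin covers the exposure comfortably; change the at-fault share or cost per case and it quickly doesn't, which is exactly why it's an actuarial question.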
It strikes me that a useful analogy here is the adoption of automatic elevators in buildings. In some ways, it's amazing that pretty much everyone in industrialized countries is OK with being locked in a windowless box controlled entirely by a computer, hanging over a shaft hundreds of feet deep. In fact, many people were terrified of exactly that when elevator operators were first replaced with automatic controls. Some places even kept operators employed to simply stand there and push the buttons, providing confidence that a trained expert was present, even though they didn't actually contribute to safety.
Eventually, autonomous elevators became common enough that people will look at you really funny if you're not willing to ride in one, even though they are still responsible for ~20-30 deaths per year.
Somehow the idea of a computer error killing me seems way worse than at least having a chance to save myself, since I'm a very cautious driver (though unlikely to be 5x safer than average). Self-driving cars need to get to airline-level safety, where crashes are a rare thing and most people don't think twice about giving up control to the pilots/autopilot. If that takes expensive lidar, that's what we should use. I can't imagine ever feeling good about trusting my life to a computer-vision algorithm.
To add to the ‘different set of accidents’ point - the accidents caused by autonomous cars will most likely also be harder to accept. They will be accidents that a human driver (at least with the benefit of hindsight) would appear unlikely to have caused.
Things like that Tesla driving straight into a wall and killing its driver. Or not seeing a van right in front of it because the sun was too bright. Things where a bit of common sense (not something AI excels at) might have avoided them.
On the flip side, the lives saved will be based on superhuman reaction times / situational awareness and will be things no person would have been able to do.
So maybe there’ll be a battle of public opinion (and PR!) weighing these things against each other.
> The challenge remains that people will be killed in accidents involving autonomous control.
While that's almost certainly true (and depending on one's interpretation of autonomous is already true), some people already believe that zero deaths in transport is possible: https://visionzeronetwork.org/about/what-is-vision-zero/. If it's possible to hold human drivers to that standard, why not autonomous systems as well?
> However, the lives lost in autonomous accidents will be a different set of people than those that would have died in human driven accidents.
That's an interesting insight into part of the problem, thanks for pointing it out.
Indeed, in America, if you ride a motorcycle but don't ride drunk, don't ride unlicensed, don't ride an unregistered bike, and do wear a helmet, you are orders of magnitude less likely to die, because you are in a statistically safer demographic of riders. Yet any one of the riders in that safer demographic could still be hit by a red-light runner and killed through no fault of their own. We would definitely feel that death more tragic and senseless.
> I expect that in the end it will come down to a business decision, and that decision will be informed by an actuarial exercise: will profits and insurance be able to cover the costs of defending and settling such cases?
I'm seeing this type of phrasing occur more and more. Once the defendant can be named in a legal action, we'll start seeing SDVs. IMHO, the worry isn't that they will kill, but that no one is to blame.
It will change the day-to-day narrative of a pedestrian, though. E.g., my thought process will change from "this person might not see me" to "that car's AI might not see me"... or even "Oh, it's a Toyota, they kill more than Hyundai... stand back!" But now I'm just writing sci-fi.
> However, the lives lost in autonomous accidents will be a different set of people than those that would have died in human driven accidents.
This is a really good point. I drive really conservatively and like to think I will never, ever cause an accident, let alone a fatal one. I think if this lever were taken away, I would have a hard time accepting automated driving for a significant amount of time.
> There will be cases where a court determines that the autonomous system was the cause.
At the moment, manufacturers almost totally escape blame for fatal accidents [involving human drivers] - it's understood societally and in the legal system that the human driver was the one at fault.
That isn't a totally accurate picture of the responsibility. The manufacturer provided a vehicle that included a risk of fatal accident. (Reducing this wrong and describing it as 'lives saved' feels uncomfortable to me.)
With an autonomous system, blame for fatalities can no longer be placed on a human driver; and yet, there is still a failure of responsibility (maybe this will be a more accurate placement of blame).
Idk about risk: that is hard to establish. There are so many conditions in which an automatic pilot hasn't been tested. We don't even know the factors involved in estimating the risk: e.g., is it dependent on the human co-pilot? And self-driving cars may change the car usage patterns, exerting a contextual influence on the risk.
Then there's the question of responsibility. Who will be held responsible when the automatic pilot is driving? If it's the human, then a high risk of causing an accident will be unacceptable to many drivers.
Obviously the car manufacturer will be liable when a fault in their product kills people.
And that is as it should be. The costs will be baked into the price of the vehicle. This aligns incentives. The manufacturer wants to pay as little as possible, and the owner will want to be safe.
With the car recording video and other data, determining what actually happened should be simple and reliable.
Note that the car owner no longer pays any insurance in this system.
> we anticipate that the number of people killed will be fewer, hence 'saving lives'. However, the lives lost in autonomous accidents will be a different set of people than those that would have died in human driven accidents. There will be cases where a court determines that the autonomous system was the cause. Families of those killed will want justice, while those separately saved by autonomous systems may never be heard from in the same case.
This reasoning was addressed with the VICP program for vaccines in the U.S., based on the idea that vaccines save many lives overall but still injure a small number of people who will want to be compensated for that injury:
https://www.hrsa.gov/vaccine-compensation/index.html
Maybe there should be a self-driving-car injury compensation program along similar lines once particular cars are convincingly proven to be even moderately safer than human drivers. People might be mad about it, but maybe the precedent of the apparent success of the VICP would be persuasive (at least for courts and legislators, and maybe for some people actually injured by self-driving cars).
A counterargument is that vaccine injuries are generally unforeseeable and reflect absolutely no fault on the part of the injured person (or injured person's guardian), while some people injured by self-driving cars will bear some fault for their own injuries, which insurers or manufacturers might well want the opportunity to adjudicate. (Otherwise, there could be a perverse incentive to deliberately do dangerous things around self-driving cars to provoke an injury and receive a payment, something I've heard already sometimes happens with personal injury claims against human drivers in some parts of the world.)
> However, the lives lost in autonomous accidents will be a different set of people than those that would have died in human driven accidents.
So far it seems that this is very much the case. Autonomous cars do relatively well in highway scenarios whereas they appear to do poorly recognizing bicycles, for instance. Reducing safety to one single metric would be a big mistake.
> Will profits and insurance be able to cover the costs of defending and settling such cases?
Risk will be transferred to individual consumers through "pay attention / hands on wheels / you're still the driver" warnings. Those warnings will never go away, even with vehicles that are in all other ways marketed as L5.
I'm wondering whether the first steps ought to be to make current driving much safer through automation - detecting drunk driving or falling asleep, limiting speed, refusing to start, alerting police and/or nearby vehicles, etc. The lessons from that might better inform the long tail of cases that autonomous vehicles will eventually face.
My guess is it's much easier for the public to understand the risks and liability of human drivers than AI.
Humans often get distracted or misinterpret signals. And their errors stop with the driver. AI could react in more unexpected ways when it goes sideways. Humans too can get weird when chemically impaired but the liability is much easier to attribute.
Sensationalism will win out quite a bit. And so will questions of responsibility. It's just a tough problem.
- People develop Bell's palsy at the same rate with the vaccine as without. But suddenly only those with the vaccine show up in social media feeds, because nobody used to post "I just got Bell's palsy" - now they do, because people are paying attention. The same will happen with AVs. "I just got into an AV accident" will make headlines, while "I just dozed off and hit a kid" will barely circulate.
- People inherently trust humans over technology, just because. So they will be quick to distrust autonomous vehicles. I've already had convos about the fact that yes, Teslas do kill, but on the whole self-driving Teslas kill far less than non-self-driving ones.
- When a human drives, the liability is on the human. When a car self-drives, the liability might be on the manufacturer.
Would this baseline include all the accidents from distracted drivers, drunk drivers, drugged drivers, etc.? Or is it referring to an average human driver who isn't intentionally breaking the law?
If the baseline includes all these sorts of human error, I see no issue with holding robots to a higher standard. Imagine if we rolled out robot policemen who only executed black people for no reason at the same rate as humans do.
A lot of comments are focusing on safety via driving better. But with self-driving vehicles, can't we make the layout of the car safer, and thus accidents cause less harm to the people inside?
For example, right now because we need to see the road, I assume there is significantly more danger from the windshield vs. a padded back on both sides of the car with passengers facing each other like in a train car.
It seems likely that we can make self-driving vehicles much safer, even with the same number of collisions, by just changing the layout.
On a related note, before we get to self-driving cars I really wish we had un-crashable cars.
They have systems like active lane keep which are supposed to increase safety but do the exact opposite of what's safe. Suppose a driver falls asleep or has a heart attack: the lane keep will beep a few times quietly and disable itself once the car slows below 40 mph, letting the car continue straight and crash. The safe thing would be to keep lane-keeping aggressively until the car comes to a complete stop, increasing the beep volume the whole way, and then put on the emergency flashers.

This doesn't even require an autonomous car - just "drive by wire" electric vehicles.
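A minimal sketch of the escalating failsafe proposed above, as a toy state machine. This is hypothetical logic for illustration only, not any manufacturer's actual behavior; the state and action names are invented.

    # Toy state machine for an unresponsive-driver failsafe (hypothetical).
    # Unlike the behavior criticized above, it never silently disengages:
    # it keeps the lane and brakes to a full stop, escalating the alarm.
    def failsafe_step(state, speed_mph, driver_responsive):
        """One control-loop tick; returns (next_state, actions)."""
        if driver_responsive:
            return "NORMAL", []
        if state == "NORMAL":
            return "WARNING", ["beep_quietly"]
        if speed_mph > 0:
            # stay engaged all the way down to 0 mph, getting louder each tick
            return "SLOWING", ["keep_lane", "brake_gently", "increase_beep_volume"]
        return "STOPPED", ["hazard_lights_on"]

    # Example ticks: driver unresponsive, car decelerating
    print(failsafe_step("NORMAL", 55, False))   # -> ('WARNING', ['beep_quietly'])
    print(failsafe_step("WARNING", 40, False))  # -> ('SLOWING', [...])
    print(failsafe_step("SLOWING", 0, False))   # -> ('STOPPED', ['hazard_lights_on'])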
For handling rapid deceleration, I think you’d be best off with seats facing the rear. A cushioned seat can distribute force much more evenly than a seatbelt when the car stops suddenly and your inertia wants to keep you going, and the backs could be reinforced to block anything coming into the passenger compartment from the collision. Hard to do that with a windshield.
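Rough numbers behind that force-distribution point, with contact areas I'm assuming purely for illustration (not measured values):

    # Pressure on the occupant in a hard stop: belt webbing vs. full seat back.
    # Mass, deceleration, and contact areas are all illustrative assumptions.
    mass_kg = 75
    decel_g = 30                    # severe but survivable crash pulse (assumption)
    force_n = mass_kg * decel_g * 9.81

    belt_area_m2 = 0.05             # rough seatbelt contact area (assumption)
    seatback_area_m2 = 0.5          # rough torso-on-seatback contact area (assumption)

    print(f"peak force: {force_n / 1000:.0f} kN")
    print(f"belt pressure:      {force_n / belt_area_m2 / 1000:.0f} kPa")
    print(f"seat-back pressure: {force_n / seatback_area_m2 / 1000:.0f} kPa")

On these made-up numbers the seat back spreads the same ~22 kN over ten times the area, which is the intuition behind rear-facing seats.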
This seems pretty reasonable and also very possible to achieve. It would be insane to allow a technology on the streets that makes as many mistakes as humans are making. I certainly wouldn’t use self driving cars if they killed 30000 people per year like humans are doing right now. How would you assign responsibility for crashes? Our current system is far from perfect but at least it’s something people understand and know how to navigate. And there are drivers that are better and more cautious than others. So it’s not just an illusion of control.
What so many autonomous car advocates seem to miss is that it is nearly impossible to meaningfully compare relative safety with current self driving cars, because we don't have level 5 autonomy yet.
In order to compare them with current technology, you'd have to be able to answer the question: how safe would human drivers actually be if they didn't have to perform their most difficult tasks? Because that is what current autonomy does.
I'm willing to believe that current tech is capable of being safer than human drivers, simply because they do so many things way better than humans do, like stopping for pedestrians and safely navigating around cyclists. But to compare them in general, that is left to be proven. You can't just compare incidents per mile driven, because autonomous vehicles can conveniently opt out of driving whenever the task gets too hard.
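A toy illustration of that selection effect, with invented numbers. If autonomous vehicles only drive the easy miles, a raw per-mile comparison flatters them even when their like-for-like advantage is small:

    # Invented rates showing how opting out of hard conditions skews the
    # headline incidents-per-mile comparison.
    human = {"easy": (90e9, 0.8e-8), "hard": (10e9, 5.0e-8)}  # (miles, fatal rate/mile)
    av    = {"easy": (1e9,  0.7e-8)}                          # AV opts out of "hard"

    def overall_rate(fleet):
        miles  = sum(m for m, _ in fleet.values())
        deaths = sum(m * r for m, r in fleet.values())
        return deaths / miles

    print(f"human overall: {overall_rate(human):.1e} fatalities per mile")
    print(f"AV overall:    {overall_rate(av):.1e} fatalities per mile")
    # Headline: the AV looks ~1.7x safer. Like-for-like on easy miles it is
    # only ~1.1x safer, and its hard-condition performance is simply unknown.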
5x safer isn't an unreasonable threshold to ask for. All life-critical, engineered systems incorporate a safety margin.
To win buy-in from the population at large, you sure as heck better have overwhelming statistics. Marginal just won't cut it.
When you're talking about taking control away from the driver, all you need is one accident a human would have prevented to create political backlash and erode trust in the system. The fact that several other unvoiced lives may have been saved offers no consolation to the victims and doesn't make the loss acceptable. I think of it as analogous to the legal doctrine that convicting one innocent is worse than letting ten guilty individuals go free.
I'm always skeptical when developers claim a computer can do a better job than a human, as I've encountered so many edge cases the programming just never adequately accounted for. It will take time and a great deal of experience running these platforms in the wild until they become truly resilient. I would be quite pissed off to find the severed limb I suffered in a crash was due to programming not sufficiently distinguishing e.g. road paint from water streaks.
And the metric you choose to define "safer" will never be perfect. Extra buffer helps offset any bias or gaps in your methodology and capture more of the long tail of "one-offs" never accounted for in version 1.
I'm a big believer that driverless vehicles will provide huge improvements to our quality of life, I just feel there's a lot of misrepresentation taking place out there today as to how far along these systems are.
This should be calibrated to the risk of the top X% of cautious/safe drivers, and exclude reckless, inexperienced, or intoxicated drivers. As a safe driver, you shouldn't have to accept risk calibrated to the "average" (i.e. drunk, reckless) driver.
Why should it be? In the end the dead bodies count, and it doesn't matter whether a cautious or an inexperienced driver killed them. Inexperienced drivers are a prerequisite for experienced drivers; there's no way to get rid of them. Excluding them from the statistics is just discounting those deaths as... somehow less important?
If a self-driving vehicle is only 1.5x (instead of 5x) as safe as the average human driver, then you're not trading between death by human vs. death by machine; you're primarily trading between death by human and spared by machine, and only secondarily between the former.
Nobody will be forcing you to buy a self-driving car for quite a while. But as a safe driver, you should care about eliminating the most unsafe drivers from the roads.

But you already do accept that risk. Every day you're near a road, there is a chance the next vehicle around the bend is drunk or reckless or using their phone. It doesn't even matter if you are in a vehicle or not - even as a pedestrian you already deal with them every day. It sucks, but it's reality.
I've questioned the lack of driving experience as a risk factor, since the pool of experienced drivers excludes those who died becoming experienced.
Assuming someone has a certain (constant) probability of excluding himself from the driving pool every year, over time the risky share of the pool will drop, as the folks most susceptible to excluding themselves will have already done so.
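A toy version of that attrition effect (all probabilities invented): with a constant annual chance of leaving the pool, the riskier cohort thins out faster, so the surviving pool looks safer every year.

    # Two driver cohorts with constant annual exit probabilities (invented).
    risky, careful = 1_000.0, 9_000.0
    p_exit_risky, p_exit_careful = 0.05, 0.005

    for year in range(21):
        if year % 5 == 0:
            share = risky / (risky + careful)
            print(f"year {year:2d}: risky share of surviving pool = {share:.1%}")
        risky *= 1 - p_exit_risky
        careful *= 1 - p_exit_careful

The risky share falls from 10% to roughly 4% over twenty years, even though no individual driver got any safer.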
Exactly. Because now we can punish those individual drivers, lock them up, take away their car and license, but are we going to pull the plug on all cars with auto-pilot X because X is causing accidents? Is a small change in the software enough to establish it as a new driver? It's "smoking is good for you" all over again.
No, on the contrary: deaths through those drivers can be eliminated. It only makes sense to look at total number of deaths, including by alcohol, drugs, inexperienced drivers, elderly drivers, distracted drivers (smartphone, etc.).
Engendering trust and reducing materially regressive liability/litigiousness is a good call - and something that SHOULD be set as a standard by an external body.
IMHO this is typically a good role for government regulation - setting a standard measurement of outcome for the public good, but not dictating HOW that should be achieved.

Now we're just haggling over the price... (as not-Churchill infamously didn't say).
I have an older friend whose driving terrifies me, but who lives in an area with effectively zero public transportation or reliable cab service. While I don't want to see this person on the roads, the alternative is literally moving into a senior community (which would probably be the death of this person).
Frankly, if self-driving cars became 0.75x as safe as the average human driver, it would still be a net safety improvement if it got this person out from behind the wheel.

As a penalty for bad driving. I believe people would be much more accepting of machines taking the wheel if it meant that at least "this guy there" isn't driving.
This title is awful (but it was copied from the site). What it should say is "Study finds that most people surveyed didn't trust self driving cars until they were five times safer".
More like "people who have never owned or been in a self driving car..." faster horses.
That said, it will happen. I just wonder what will happen as self driving car safety exceeds human drivers. Will people be prohibited/disincentivized to drive?
That might be their current stated preference, but I don't think it will be most people's actual choice. Imagine if self-driving were available on every car right now at the press of a button, and it was as safe or twice as safe as a normal driver. How many people would press that button, start texting, and just progress to paying less and less attention?
People already don't pay the attention they should when driving or when using a driver assistant system.
Based on the abstract, it looks like this is an attempt to measure how safe self-driving cars need to be in order for people to prefer using them. It is not any sort of requirement from the NIH.
What's so magical about 5? Why not 4x or 6x? 2x safer would be 500,000 lives saved yearly. We can see that even 1.25x safer is very significant. It's just weird seeing that magic number 5x...
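The arithmetic behind those figures, taking as a baseline the roughly one million global road deaths per year that the 500,000-at-2x number implies: a system k times safer saves D(1 - 1/k) lives.

    # Lives saved per year at a k-times-safer system, assuming deaths scale
    # inversely with the safety factor. D ~ 1M/yr is the implied global baseline.
    D = 1_000_000
    for k in (1.25, 2, 4, 5, 6):
        print(f"{k}x safer: {D * (1 - 1 / k):,.0f} lives saved per year")

Note how the marginal gain flattens: going from 4x (750,000) to 6x (833,333) adds far fewer lives than going from 1.25x (200,000) to 2x (500,000), which is part of why a hard 5x cutoff looks arbitrary.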
I don't think it's useful to talk about global traffic deaths in this context, since regulation will differ by country, the difficulty of developing self-driving will differ by country, and road safety varies enormously by country. The US is likely to get self-driving first, but is already way safer than the average country, and the countries where deaths are higher are less likely to be able to afford the rollout of self-driving cars.
In the US there are ~36,000 deaths from motor vehicle accidents per year.
To give some context, America could improve its fatality rate roughly 4x by bringing itself into line with the safety standards observed in Western Europe, whose fatality rate is already around 2.7 per 100,000 people.
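Checking that comparison against the figures quoted here (US population taken as roughly 330 million, an approximation):

    # US fatality rate per 100k people vs. the Western Europe figure above.
    us_deaths = 36_000
    us_population = 330_000_000
    us_rate = us_deaths / us_population * 100_000
    we_rate = 2.7
    print(f"US: {us_rate:.1f} per 100k; ratio vs. Western Europe: {us_rate / we_rate:.1f}x")

That works out to about 10.9 per 100,000 for the US, i.e. a factor of about 4 against the 2.7 figure.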
It's also important to remember that self-driving is likely to represent the safest journeys - highway commutes etc.
Not magical, but you have to pick something. I suspect the goal is to pick a number that most would think is better than an experienced, awake, attentive (not looking at a phone) human at the wheel. So even the safest drivers would be safer on the road as the number of autonomous driven cars increases.
At only 1.25x it might well be worse than you. Keep in mind that the average human driver includes people that are tired, high, drunk, unlicensed, mentally ill, physically compromised, etc.
While common, someone getting killed because someone is drunk or asleep is much more acceptable than having a computer make a mistake.
If we want society as a whole to accept autonomous cars it's best to show a clear benefit to society, not just better than 51% of drivers.
I rented a 2019 Mercedes last week and drove it for over 1,200 miles, most of which were driven with the car's driving-assist technologies enabled.
My guess is that because this car drives so "carefully", such as automatically following at a safe distance (leaving maybe a 3 second gap between the car in front of it), human drivers will end up causing many more accidents. There must have been more than 50 drivers (with many annoyed stares into my window as they passed) that made unnecessary lane changes to go around me just to then closely follow the car in front of me.
This large gap may make it seem like the car is going slower than it is, as so many drivers tried to overtake me but failed as slower traffic in the other lanes blocked them.
Human drivers may just become worse over time as more law-abiding autonomous vehicles hit the road. "5x" might not be as much of an improvement in the future.
The problem is that the average for human drivers is brought down mostly by bad drivers. However, even if we're talking about averages, roughly 50% of people are better than average. I would not feel too confident in a self-driving car that is only approximately as good as 1 out of 2 drivers; I would want the car to be better than a high upper percentile, especially knowing an automated system can have reaction times that put any human's to shame.

So yes, equal safety is far from enough. A self-driving car that was only as good as the average driver would perform worse than roughly 50% of drivers. You're asking those people, plus some percentage of people who overestimate their abilities, to trust a car that would perform worse than them.
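A toy example of why "as safe as the average driver" can still be worse than most drivers: crash risk is heavily skewed, so a few bad drivers drag the mean well above the median (risk numbers invented):

    # Ten hypothetical drivers' crash risks (arbitrary units); most are careful,
    # two outliers inflate the mean.
    import statistics

    risks = [1, 1, 1, 1, 1, 2, 2, 3, 8, 20]
    mean, median = statistics.mean(risks), statistics.median(risks)
    av_risk = mean   # an AV exactly as safe as the *average* driver
    safer_than_av = sum(r < av_risk for r in risks)
    print(f"mean risk {mean}, median risk {median}")
    print(f"{safer_than_av} of {len(risks)} drivers are safer than that AV")

Here the mean is 4 while the median is 1.5, so an AV matching the "average" is actually worse than 8 of the 10 drivers.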
Misleading title. The article is not saying we need more than equal safety; it's saying that self-driven vehicles are perceived as more dangerous and people in the study wanted them to be 4-5x safer than human-driven vehicles to overcome this.
Human beings need a target for vengeance and hatred. If someone kills your child, you can hate them. If an automated car kills your child, you don't have anyone to hate. Saying that this is about safety thresholds is a distraction from the true human problem exposed by automated cars:
"Which individual will held accountable and risk jailtime if their car kills someone you love, and how can this individual be identified from the appropriate government registries within 24 hours of a death?"
Until this is clearly defined in law, automated driving will continue to be resisted under any number of plausible justifications, and arguing with those justifications will have little effect.
What most people seem to misunderstand about "driving" is that it is not a sensory or stimulus response problem. It is a cognition problem.
Computers and computer-based "AI" are good at solving bounded problems that do not require open-ended, on-the-fly model building and judgment exercised in real time. At this, biological intelligence, honed by millions of years of selective evolution, still excels.
Computers can "solve" GO or Chess because fundamentally the rules are simple and the models required to play these games are subject to only a few, static constraints.
Driving, in all conditions on all roads, on the other hand, requires a flexible model of the real world that approaches that built by sentient biological intelligences.
The problem is not sensor or perception latency.
Sure LIDAR can blow human perception out of the water.
But that does not matter.
What matters is making the correct decision based on sensory inputs using a high fidelity model of the real world.
A computer does not understand the difference between two people playing catch by the roadside, parallel to the road, and a situation where a child might chase a soccer ball into the street. This is not just combinatorics and probability... it is theory of mind. Thus, until AGI is invented, FSD will be a misnomer.
It doesn't matter how much faster a computer can perceive if it does not know how to integrate the raw data it receives into a model of the world that yields decisions appropriate to the circumstances.