"Uber recently placed an order for 24,000 Volvo cars that will be modified for driverless operation. Delivery is scheduled to start next year."
Wow. The other driverless car players should be all over this lobbying to shut Uber down. If Uber massively deploys a commercial service with subpar quality in order to "win", and then those cars start getting into accidents, the entire field is going to be delayed by 10 years. The general public is not going to just think "Uber is bad", they are going to think "self-driving cars are bad". Politicians will jump all over it and we'll see very restrictive laws that no one will have the guts to replace for a long time.
And honestly, if that happens, that's probably what we would need anyway. If the industry doesn't want to be handcuffed, they need to figure out some really good standardized regulations covering data sharing with law enforcement, how to determine fault for self-driving vehicles, and what the penalties should be. Rules that are fair and strict.
> "We don't need redundant brakes & steering or a fancy new car; we need better software," wrote engineer Anthony Levandowski
Any engineer with this attitude needs to learn the lesson of the Therac-25. The issues in the Ars article are very similar to section 4 "Causal Factors" of the report[1].
> To get to that better software faster we should deploy the first 1000 cars asap.
Is that admitting that they do not have the "better software" and intend to deploy 1000 cars using "lesser software"? That's treading dangerously close to potential manslaughter charges if prosecutors can prove this willful contempt for safety to a court.
[1] http://sunnyday.mit.edu/papers/therac.pdf
To play Kalanick's adversary, he might be arguing for more real-world data collection. Tesla famously equipped most of their cars with more sensors than were required at the time of delivery, using the data to drive development of the Autopilot function that was later added to the cars.
> "We don't need redundant brakes & steering or a fancy new car; we need better software," wrote engineer Anthony Levandowski
He is clearly right about that. Human-driven cars are safety critical and already do fine without redundant brakes and steering. How many crashes are due to brake or steering failure? I'm guessing it's well under 10%.
Most human crashes are due to bad driving, and for computers it will be the same. I mean, even this fatal crash probably could have been prevented with better software. It's not like the brakes failed. They just weren't applied.
> To get to that better software faster we should deploy the first 1000 cars asap.
This is where he is totally mad.
I get the feeling based on comments here that there is a severe lack of ethical and critical thinking among engineers and developers. I recognize that this is only a vocal minority but the constant mantra of "move fast and break things", where getting rich at any cost is seen as a virtue, has made me extremely disillusioned with this brand of startup culture. Doubly so when people are trading stock tips on how to profit from tragedy by supporting the worst actors in the field.
"Uber announced that it had driven 2 million miles by December 2017 and is probably up to around 3 million miles today. If you do the math, that means that Uber's cars have killed people at roughly 25 times the rate of a typical human-driven car in the United States."
Wow there goes that "safer than human drivers" argument.
With one data point, you can't extrapolate much. This is misuse of statistics.
Consider if there was a new lottery and you weren't sure what the odds of winning were. You play it three weeks in a row and the third time you win a million dollars. Conveniently, no one else tries the new lottery yet.
Does it follow then that the odds of winning a million dollars are 1 in 3? Or should you play it a few more times before you declare to all that one in three plays will make one a millionaire?
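For what it's worth, the arithmetic behind the article's "roughly 25 times" figure is easy to reproduce. A sketch, using the approximate figures quoted in the thread (about 3 million Uber autonomous miles with 1 fatality, versus 37,461 US road deaths over roughly 3.2 trillion vehicle miles in 2016); the exact ratio depends heavily on the assumed mileage:

```python
# Back-of-the-envelope: Uber's observed fatality rate vs. the US average.
# Figures are the approximations discussed in this thread, not exact data.

uber_miles = 3_000_000
uber_deaths = 1

us_deaths = 37_461
us_miles = 3.2e12

uber_rate = uber_deaths / uber_miles   # deaths per mile
us_rate = us_deaths / us_miles

print(f"Uber:   {uber_rate * 1e8:.1f} deaths per 100M miles")
print(f"US avg: {us_rate * 1e8:.1f} deaths per 100M miles")
print(f"Ratio:  {uber_rate / us_rate:.0f}x")  # in the ballpark of the article's 25x
```

Which is exactly why the lottery analogy matters: the point estimate is enormous, but it rests on a single event.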
Driverless cars are also being tested in relatively nice driving conditions. People, on the other hand, drive in all sorts of conditions. X deaths per Y easy driving miles is going to translate to many more than X deaths per Y representative driving miles.
I think focusing on the safety statistics is somewhat of a red herring. Uber wins when you get hung up on how well its self-driving cars drive, because that's something that they can improve. Instead, I think we should focus on the fact that these dangerous machines are being operated by chronically irresponsible companies, and on the fact that cars in general have issues; the objection is not that we expect self-driving cars to be less safe than human drivers.
Any comparison between self-driven miles and typical human-driven miles has to take into account all the times a safety driver took over driving to prevent an accident. Those self-driven miles have a huge asterisk.
It's much, much worse than that, since that 37,461 deaths number includes all deaths, including motorcycle/truck/SUV deaths, which have higher death rates than passenger cars, perhaps 5x-10x higher.
A proper comparison in this case is comparing passenger car death rates.
And then you need to factor in other conditions, such as the fact that the weather was clear, and that you should be comparing pedestrian/bicyclist deaths, and you see that this incident already throws the death rate for autonomous vehicles out of whack.
Given exponentially-distributed distance between fatalities, this would have a 3% chance of happening if Uber cars were as safe as humans. So it's unlikely.
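That figure checks out under the stated assumption. A quick sketch (the human fatality rate used here is an approximation from the 2016 US totals cited in this thread):

```python
import math

# If fatalities arrive as a Poisson process at the human-driver rate,
# the distance to the first fatality is exponentially distributed.
# Assumed human rate: ~37,461 deaths over ~3.2 trillion miles (2016 US).
human_rate = 37_461 / 3.2e12   # deaths per mile
uber_miles = 3_000_000

# P(at least one fatality within uber_miles) at the human rate:
p = 1 - math.exp(-human_rate * uber_miles)
print(f"{p:.1%}")  # about 3-4%, matching the figure quoted above
```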
You can't just look at Uber to draw a sweeping conclusion about all autonomous cars on the road. How many miles have Tesla, VW, Volvo, Waymo, Google, Ford, and Apple driven?
I don't know how to say it kindly, but there is a difference in what type of person the car killed. If the fatality was another rule-abiding driver on the road, or a pedestrian crossing at a crosswalk, that would be really bad. However, if it was someone not following the safety rules by jaywalking, then that person accepted a higher probability of being in an accident. When making laws, for the most part, they are for the benefit of law-abiding people.
I don't understand how there isn't a non-ML-based piece of code that looks at moving radar and lidar returns and performs an emergency brake, light flash, horn, or dodge if that vector would intersect the car's path with any confidence. Even slowing down to 20 mph can turn a fatal accident into an injury.
What if it has nothing to do with ML? You see a point cloud that's moving toward your lane at a speed estimate of say 2 mph. If that's below the sensor noise threshold, you might classify the cloud as a stationary object on the other lane (say a stranded car). In that case, by the time you realize that this stationary object has somehow moved itself into your own lane, it is already too late.
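For illustration, here is a deliberately simplified sketch of such a non-ML check, with made-up numbers and thresholds throughout; it also shows how a noise floor on velocity estimates produces exactly the failure mode just described:

```python
# Hypothetical, simplified last-resort safety check: given a tracked
# object's lateral offset and estimated lateral speed, decide whether
# its predicted path crosses ours soon enough to warrant braking.
# All thresholds are illustrative, not from any real system.

NOISE_FLOOR_MPH = 3.0     # below this, velocity estimates are untrustworthy
TTC_BRAKE_SECONDS = 2.0   # brake if the predicted conflict is this close

def should_emergency_brake(lateral_offset_ft, lateral_speed_mph,
                           forward_dist_ft, own_speed_mph):
    """Return True if the object's predicted path crosses ours in time."""
    if abs(lateral_speed_mph) < NOISE_FLOOR_MPH:
        # Speed is below the sensor noise floor: the object is treated as
        # stationary, so an obstacle drifting slowly into the lane is missed.
        return False
    fps = lambda mph: mph * 5280 / 3600   # mph -> feet per second
    t_cross = lateral_offset_ft / fps(abs(lateral_speed_mph))  # reaches our lane
    t_reach = forward_dist_ft / fps(own_speed_mph)             # we reach it
    return abs(t_cross - t_reach) < 1.0 and t_reach < TTC_BRAKE_SECONDS

# A pedestrian 12 ft to the side, walking in at 3.5 mph, 80 ft ahead at 40 mph:
print(should_emergency_brake(12, 3.5, 80, 40))
```

Note that the same walker at an estimated 2 mph falls under the noise floor and is ignored, which is the trap the parent comment describes.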
> "We don't need redundant brakes & steering or a fancy new car; we need better software," wrote engineer Anthony Levandowski to Alphabet CEO Larry Page in January 2016.
Looks like Uber has attracted Levandowski due to his cultural fit.
Hmm, but wouldn’t his priorities be correct in the context of this crash? There hasn’t been any suggestion (so far) that the crash occurred because some hardware component stopped working; rather, it seems like the software failed to identify the pedestrian in time. So better software seems precisely what was needed. Though I can imagine that better sensors might also have helped…
Why don’t we require software engineers who work on self-driving car software to go through licensing and certification?
And then, if their code results in a death, they are liable and can have their license completely revoked, and they would be unable to work on self-driving cars again.
- Expecting engineers to always write perfect code is insane. Mistakes happen.
- If bad code makes it into production, that is a systemic failure not an individual one (Why didn't the bug get caught in code review, QA, etc.)
- No one is going to want to work on a project where a single failure can taint their career.
- What if I use a 3rd party lib and that is where the bug is. Who is at fault then? What if the code isn't buggy, but I'm using it in an unexpected way because of a miscommunication? If I am only allowed to use code that I (or someone certified) has written, development is going to move at a snail's pace.
- What if I consult with an engineer who doesn't have a certification on a design decision and the failure is there, who is at fault?
- What if the best engineer on the project makes a mistake and ends up banned? Does he/she leave the project and take all their tribal knowledge with them, or are they still allowed to consult? If they can consult, what stops them from developing by proxy by telling other engineers what to write?
Not to be a dick, but this is an awful idea that would basically kill the self-driving car.
From many years of developing safety-critical software, I reckon culture and processes are more important than certification. There are various standards for developing safety systems in other industries (defence, aviation, etc.), and these standards exist for a reason. Have Uber applied any standard to their automation software? Or equivalent development processes? "Move fast and break things" is fine for an app, but not fine for controlling a vehicle.
My guess is that it's because the field is so new that there aren't really any experts who can define what reasonable rules for such licensing and certification would be.
We have this requirement, at least in Europe. There are even ISO standards to follow; the relevant one is ISO 26262. But it seems this does not apply to the permits issued to Uber for these cars.
If you want error-free software you need a blameless culture based around process, not individual ownership of code. It should not even be possible for an error to be one individual's mistake, because by the time it hits the road it should have gone through endless code review and testing cycles.
Not sure how that could be the philosophy of any self-driving car company?
That'd be extremely foolish. And regardless of the dumb things the previous Uber CEO has done in the past and the big deal people are making over a $150 license, they have still hired some of the best engineers in the world.
You basically have to find the brightest-of-the-brightest to build AI... and Uber pays very well and puts plenty of effort into recruiting that talent.
Not to mention the massive PR and monetary risks that are inherent in killing people with your products. That would make any company highly risk-averse.
Every engineer on this project at Uber knows very well that their car completely failed in one of its most basic expected functions. It's incredibly obvious, and a number of independent experts have said as much.
I'd be fairly surprised if there's any real appetite at Uber to continue with this now. It was never anywhere near their core competency.
"Indeed, it's entirely possible to imagine a self-driving car system that always follows the letter of the law—and hence never does anything that would lead to legal finding of fault—but is nevertheless way more dangerous than the average human driver. Indeed, such a system might behave a lot like Uber's cars do today."
It doesn't matter if Uber makes cars that are technically not at fault, if they're mowing over pedestrians at a rate significantly higher than human drivers then they should never be allowed on public roads. People mess up occasionally. The solution is not an instant death sentence administered by algorithm.
The author is making a distinction between whether Uber was legally at fault (as stated in the article, likely not) versus whether the accident was avoidable. I agree with the author's position that the accident was likely avoidable.
I think the standards are different in this case. The pedestrian definitely should not have been where they were, and if this had been an incident with a human driver, you would probably say the driver was not at fault. But I think this is slightly different.
They are on the road with conditions because what they are doing is somewhat experimental still.
There is a safety driver for a reason that did not respond.
A human driver may have collided but would have responded and potentially avoided a fatality (if not a collision).
The benefits of autonomous driving completely failed on all counts in this case, which implies that being on a public road is far too early for Uber, suggesting that some fault lies with Uber or the regulators.
The other missing part is that it is the human driver who is responsible. This is a test vehicle and their job is to be ready to take over at any time as if they are driving.
It seems unlikely that the police will find any fault, because they probably don't want to have to file a criminal charge against the driver, but that is who it would go against if there was fault.
Why is everyone considering it a forgone conclusion that self driving cars will quickly become much, much safer than human driven cars? Yes, lots of people die every year in human driven car accidents. But it is equally true that our most sophisticated AI/ML can only really operate within very narrowly defined parameters (at least when compared to the huge sets of uncertain parameters humans deal with every day in the real world). Driving is perhaps one of the most unpredictable activities we can engage in, anecdotally supported by my daily commute. What if our self driving software never becomes good enough? How many more deaths are we willing to go through to find out?
I was at SXSW a few weeks ago and went to an Uber driverless car talk. They spent the first half of the talk discussing driver safety; it felt incredibly hollow.
If you really cared about safety, there are far more immediate and impactful solutions than spending billions on self-driving cars. If they came out and said that they were doing it to make money or to make driving easier, it would have carried more weight. But you just can't trust a word this company says.
* Flouted Taxi regulations
* Living in legal gray zones in regards to contractors vs employees
* Designed a system to avoid law enforcement
* Performed shady tactics with its competitors
* Illegally obtained the private medical records of a rape victim
* Created a workplace where sexual harassment was routine
* Illegally tested self-driving cars on public roads in California without obtaining the required state licenses.
* Possibly stole a LIDAR design from a competitor
Now their vehicle has killed a pedestrian in a situation that self-driving vehicles should be much better at than humans (LIDAR can see in the dark, and the reaction time of a computer is much better than a human's).
Uber has exhausted their "benefit of the doubt" reserve. Maybe they need to be made an example of, with massive losses to investors and venture capitalists as an object lesson that ethics really do matter, and that bad ethics will eventually hurt your bank account.
>"One of my big concerns about this incident is that people are going to conflate an on-the-spot binary assignment of fault with a broader evaluation of the performance of the automated driving system, the safety driver, and Uber's testing program generally,"
Self-driving cars are currently in that state where they're always in accidents but never technically at fault. When individuals have this behavior pattern, their insurance company drops them, because if they're so frequently present when shit hits the fan, they're a time bomb from a risk perspective.
If they are at fault they should be punished, but you do realize that the expectation for self-driving vehicles is not to eliminate all car-related deaths, right?
Edit: wow, this triggered some people. Somehow 'if they are at fault they should be punished' got interpreted as 'they are not at fault and should not be punished'.
Machine learning and AI are data-hungry algorithms, and the concern is that there isn't enough "emergency situation" data. Also, a detector cannot have both a 100% probability of detection and a 0% probability of false alarm. You have to sacrifice one for the other, and that tradeoff is usually influenced by weighted probabilities and priorities (e.g., a smooth ride).
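A toy sketch of that tradeoff, using synthetic detector scores and arbitrary thresholds: when the score distributions for real obstacles and clutter overlap, every choice of threshold trades detection rate against false-alarm rate.

```python
import random

# Synthetic detector scores, for illustration only: clutter scores
# centered at 0, real-obstacle scores centered at 2, both noisy.
random.seed(0)
clutter = [random.gauss(0.0, 1.0) for _ in range(10_000)]
obstacles = [random.gauss(2.0, 1.0) for _ in range(10_000)]

# Sweeping the threshold moves both rates together: a lower threshold
# catches more obstacles but also fires on more clutter.
for threshold in (0.5, 1.0, 1.5):
    pd = sum(s > threshold for s in obstacles) / len(obstacles)
    pfa = sum(s > threshold for s in clutter) / len(clutter)
    print(f"threshold={threshold}: detection={pd:.0%}, false alarm={pfa:.0%}")
```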
This event has me thinking about the job of the behind-the-wheel backup driver. They get an easier job than a real driver, at the cost of potentially taking the fall if an accident occurs. I wonder if the pay is better.
I actually don't think it's really easier. Continuous attention is easier to maintain than hours of boredom in which you suddenly have to react out of nowhere... maybe.
"Testing" of driverless cars seems to be the wrong way around. The software should try to learn from human drivers: watch them instead of being watched by them.
The way it would work would be: the human is driving and the software is, at the same time, watching the driver and figuring out an action to take. Every time the driver's and the software's behavior differ, the event is logged and analyzed to figure out why there was a difference and who guessed better.
But the way testing is currently going on, it seems millions of miles are wasted where nothing happens and nothing is learned.
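A minimal sketch of that "shadow mode" loop; the data structures, field names, and tolerances are all hypothetical:

```python
# Shadow-mode comparison: the software plans in parallel while the human
# drives, and only disagreements are logged for later analysis.
# Tolerances are made-up illustrative values.

STEER_TOLERANCE = 2.0   # degrees
SPEED_TOLERANCE = 3.0   # mph

def shadow_compare(human, software, log):
    """Append a record when the planner's output diverges from the human's."""
    steer_diff = abs(human["steer"] - software["steer"])
    speed_diff = abs(human["speed"] - software["speed"])
    if steer_diff > STEER_TOLERANCE or speed_diff > SPEED_TOLERANCE:
        log.append({"human": human, "software": software})

log = []
shadow_compare({"steer": 0.0, "speed": 35.0},
               {"steer": 0.5, "speed": 34.0}, log)   # agreement: not logged
shadow_compare({"steer": 0.0, "speed": 35.0},
               {"steer": 12.0, "speed": 20.0}, log)  # divergence: logged
print(len(log))  # 1
```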
No, what that gets you is smooth normal driving and poor handling of emergency situations. People have tried using supervised learning for that - vision and human actions for training, steering and speed out. Works fine, until it works really badly, because it has no model of what to do in trouble.
Yeah, that doesn't work, though, basically because you would need an excellent situation representation to really understand the driver's reactions to outside events. But that does not exist.
Perception and situation representation are key to mastering the driving task, and they both differ greatly between humans and machines.
Creating the perfect self-driving car, with redundant systems, safety everything & so on, will certainly help its safety record.
But it will also drive up the cost.
And put it out of reach for a lot of people.
If the goal is to save lives, the bar self-driving cars should be held to is what humans do driving today, not perfection.
It's a pretty unfair comparison, with 1 death on one side and over 30k on the other...
For example, unnecessarily stopping in the middle of a highway is extremely dangerous, especially if visibility is limited or roads are slippery.
Just like it relocated its testing to get a more "business friendly regulatory environment".
1) They make it appear that Uber is a car manufacturer.
2) Even though Uber has not been determined to be at fault, the author seems to want to make it that way anyway.
Maybe testing of autonomous vehicles should be done off public roads (at least at this stage of development).
"win-at-any-cost" and "second place is first looser" (sic) do not cohere with safety.