According to data obtained from the self-driving system, the system first registered radar and LIDAR
observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph.
As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian
as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path.
At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver
was needed to mitigate a collision (see figure 2).
According to Uber, emergency braking maneuvers are
not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle
behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to
alert the operator.
I worked on the autonomous pod system at Heathrow airport[1]. We used a very conservative control methodology: essentially, the vehicle would remain stopped unless it received a positive "GO" signal from multiple independent sensor and control systems. The loss of any "GO" signal would result in an emergency stop. It was very challenging to get all of those "GO" indicators reliable enough to prevent false positives and constant emergency braking.
The reason we were ultimately able to do this is that we were operating in a fully segregated environment of our own design. We could be certain that every other vehicle in the system was something that should be fully under our control, so anything even slightly anomalous should be treated as a hazard situation.
There are a lot of limitations to this approach, but I'm confident that it could carry literally billions of passengers without a fatality. It is overwhelmingly safe.
Operating in a mixed environment is profoundly different. The control logic is fully reversed: you must presume that it is safe to proceed unless a "STOP" signal is received. And because the interpretation of image & LIDAR data is a rather... fuzzy... process, that "STOP" signal needs a fairly high confidence threshold, otherwise your vehicle will never move.
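To make the contrast concrete, here is a minimal sketch of the two philosophies in Python. The signal names and the 0.9 confidence threshold are my inventions for illustration, not taken from any real system:

    # Fail-safe logic of a segregated guideway: remain stopped unless
    # every independent subsystem actively says GO.
    def segregated_policy(go_signals):
        if all(go_signals.values()):
            return "PROCEED"
        return "EMERGENCY_STOP"  # loss of any single GO halts the vehicle

    # Fail-operational logic of mixed traffic: proceed unless perception
    # is confident enough to justify a STOP. Set the threshold too low and
    # the vehicle brakes for every shadow; too high and it misses hazards.
    def mixed_traffic_policy(detections, stop_threshold=0.9):
        if any(d["collision_probability"] >= stop_threshold for d in detections):
            return "STOP"
        return "PROCEED"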
Uber made a critical mistake in counting on a human-in-the-loop to suddenly take control of the vehicle (note: this is why Level 3 automation is something I'm very dubious about), but it's important to understand that if you want autonomous vehicles to move through mixed-mode environments at the speeds at which humans drive, then it is absolutely necessary for them to take a fuzzy, probabilistic approach to safety. This will inevitably result in fatalities -- almost certainly fewer than when humans drive, but plenty of fatalities nonetheless. The design of the overall system is inherently unsafe.
Do you find this unacceptable? If so, then ultimately the only way to address it is by changing the design of the streets and/or our rules about how they are used. These are fundamentally infrastructural issues. Merely swapping out vehicle control systems -- robot vs. human -- will be less revolutionary than many expect.
[1] http://www.ultraglobalprt.com/
That quote is the crux of it when you pair it with this other section: "In addition, the operator is responsible for monitoring diagnostic messages that appear on an interface in the center stack of the vehicle dash and tagging events of interest for subsequent review."
So you have a "driver" who has to be monitoring a diagnostic console, AND has to be separately watching for non-alerted emergency events to avoid a fatal crash? Why not hire two people? Good god.
That's not just one error but a whole book of errors, and that last bit, combined with the reliance on the operator to take action, is criminal. (And if it isn't, it should be.)
I hope that whoever was responsible for this piece of crap software loses a lot of sleep over it, and that Uber will admit that they have no business building safety critical software. Idiots.
For 6 seconds the system had crucial information and failed to relay it; for the final 1.3 seconds the system knew an accident was going to happen and failed to act on that knowledge.
Drunk drivers suck, but this is much worse. This is the equivalent of plowing into a pedestrian while in full control of your vehicle because you are so afraid of over-reacting to your crappy perception of the world that you treat the risk of killing someone you know is there as the lower one.
Not to mention all the errors in terms of process and oversight that allowed this p.o.s. software to be deployed in traffic.
I hope this won't stay unpunished (both at corporate and personal level) if confirmed.
Ok, so the AI is too panicky and will brake for no apparent reason so they have to disable that bit while they work on it. Fine.
But why the hell wouldn't you have the thing beep to alert the driver that the AI thinks there is a problem and they need to pay extra attention? In fact, it seems like this would be helpful when trying to fine-tune the system.
I understand that the emergency maneuver system was disabled, so the car did not brake between t minus 1.3 and t. But why didn't it brake from t minus 6 to t minus 1.3? It looks like it detected that the car's and the object's paths were converging, so why didn't it brake during that interval?
>At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2).
>The system is not designed to alert the operator.
>The vehicle operator intervened less than a second before impact by engaging the steering wheel.
>She had been monitoring the self-driving system interface
It seems like this was really aggravated by bad UX. Had the system alerted the operator, and had the operator had a big red "take whatever emergency action you think is best, or stop ASAP if you don't know what to do" button to mash, this crash would have had a much better chance of being avoided.
Things coming onto the road unexpectedly aren't exactly an edge case when it comes to crash-causing situations. I don't see why they wouldn't at least alert the operator if the system detects a possible collision with an object coming from the side and the object is classified as one of a certain set of types (pedestrian, bike, other vehicle, etc.); there's no need to alert for things classified as plastic bags.
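A minimal sketch of the kind of alert filter I mean (the class names and fields are hypothetical):

    # Hypothetical operator-alert rule: chime if a path-converging object
    # belongs to a class that matters; stay quiet for wind-blown debris.
    ALERT_CLASSES = {"pedestrian", "bicycle", "vehicle", "unknown"}

    def should_alert_operator(obj):
        return obj["path_converging"] and obj["classification"] in ALERT_CLASSES

Notably, every classification the system actually cycled through here (unknown object, vehicle, bicycle) would have cleared a filter like this.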
I don't see why they disabled the Volvo system. If they were setting up mannequins in a parking lot and teaching the AI to slalom around them, I can see why that might be useful, but I don't see why they would want to override the Volvo system on the road. At the very least, the cases where the systems disagree are useful for analysis.
Maybe it reports so many false positives that Uber turned those off and just collects the data to improve the algorithm?
So, in the case that emergency braking is needed, nothing is designed to happen and no one is informed. I guess they just hoped really hard that it wouldn't murder anyone?
> 1.3 seconds before impact ... emergency braking maneuver was needed ... not enabled ... to reduce the potential for erratic vehicle behavior
This wind-up toy killed a person.
Transport is a waking nightmare anyway. Every time you get in your car, every mile you drive, you're buying a ticket in a horrifying lottery. If you lose the lottery you reach your destination. If you win... blood, pain, death.
Into this we're setting loose these badly-programmed projections of our science-fiction.
- - - -
A sane "greenfield" transportation network would begin with three separate networks, one each for pedestrians, cyclists, and motor vehicles. (As long as I'm dreaming of sane urban infrastructure, let me sing the praises of C. Alexander's "A Pattern Language" et al., and specifically the "Alternating Fingers of City and Country" pattern!)
My mom has dementia and is losing her mind. We don't trust her to take the bus across town anymore, and she hasn't driven in years. If I wanted an auto-auto[1] to take her places safely I could build that today. It would be limited to about three miles an hour with a big ol' smiley sign on the back saying "Go Around Asshole" in nicer language. Obviously, you would restrict it to routes that didn't gum up major roads. It would be approximately an electric scooter wrapped in safety mechanisms and encased in a carbon fiber monocoque hull. I can't recall the name now but there's a way to set up impact dampers so that if the hull is hit most of the kinetic energy is absorbed into flywheels (as opposed to bouncing the occupant around like a rag doll or hitting them with explosive pillows.) This machine would pick its way across the city like "an old man crossing a river in winter." Its maximum speed would at all times be set by the braking distance to any possible obstacle.
[1] I maintain that "auto-auto" is the obviously cromulent name for self-driving automobiles, and will henceforth use the term unabashedly.
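That last speed-limiting rule is easy to state precisely. A sketch, assuming a gentle 2 m/s^2 comfort-limited deceleration (my assumption, not a figure from any real vehicle):

    import math

    # Never travel faster than you can stop within the distance known to
    # be clear: from v^2 = 2*a*d, v_max = sqrt(2 * a * d).
    def max_safe_speed_ms(clear_distance_m, decel_ms2=2.0):
        return math.sqrt(2 * decel_ms2 * clear_distance_m)

    print(max_safe_speed_ms(10.0))  # ~6.3 m/s (~14 mph) with 10 m clear
    print(max_safe_speed_ms(1.0))   # ~2.0 m/s, walking pace, with 1 m clear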
"According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior."
If you can't give your "self-driving software" full access to the brakes because it becomes an "erratic driver" when you do, you do not have self-driving software. You just have software controlling a car that you know is an inadequate driver. If the software is not fully capable of replacing the driver in the car you have placed it in (as shipped, except for the modifications necessary for software control), you do not have a safe driving system.
So, I can't read the article, but I did read the NTSB report directly. Basically, it sounds like Uber was not ready, but reassured themselves with "but there's a human driver who can intervene". The fact is, humans are very bad at remaining vigilant for long periods of doing nothing and then needing to intervene at a moment's notice. Computers are good at that (and Volvo's built-in safety systems might have worked if Uber had not disabled them), but humans are bad at it.
Volvo has it right: human driver, computer backup. Uber's idea of a human acting as a last-second backup to a computer gets the relative strengths and weaknesses of each exactly wrong.
I'm irritated at myself for swallowing Uber's initial line on this (and the interpretation of the officers who reviewed the cam footage provided by Uber) without sufficient critique.
This accident should have been avoided. No excuses.
The sensors observed the pedestrian 6 seconds before impact! That's more than enough time to come to a complete stop.
That's enough time to sound a bell that alerts the driver, and for the driver to manually react, press the brakes, and come to a complete stop.
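Rough kinematics bear this out. A back-of-envelope check, assuming 0.8 g of dry-pavement braking and a 1.5 s human reaction time (both figures are my assumptions):

    MPH_TO_MS = 0.44704
    v = 43 * MPH_TO_MS        # ~19.2 m/s at first detection
    decel = 0.8 * 9.81        # assumed braking deceleration, m/s^2
    reaction = 1.5            # assumed human reaction time, s

    distance_available = v * 6.0                     # ~115 m covered in 6 s
    braking_distance = v ** 2 / (2 * decel)          # ~24 m to stop
    with_reaction = v * reaction + braking_distance  # ~53 m including a human

    print(distance_available, braking_distance, with_reaction)

Even with a slow human in the loop, the stop fits in the available distance twice over.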
And this was a pretty easily preventable scenario (which just makes it more tragic of course). Software is nowhere near ready to drive cars on real roads.
The initial coverage was terrible and missed the point completely. So much "it was dark" or "the ped was at fault". I was disappointed, though not surprised.
Does anyone else find these two quotes a bit hard to stomach?
> Although toxicological specimens were not collected from the vehicle operator, responding officers from the Tempe Police Department stated that the vehicle operator showed no signs of impairment at the time of the crash.
> Toxicology test results for the pedestrian were positive for methamphetamine and marijuana.
So they tested the victim for drugs but not the Uber employee in the car??
Other than that, Uber's so-called "self-driving" system sounds like crap and should never have been allowed to be used in that state.
The test is probably a routine part of an autopsy, whereas it likely wasn't a routine part of the police response (which uses visual and verbal cues to preliminarily assess inebriation).
I could be totally wrong, but I thought I read that the emergency braking function was Volvo's, and built into the base vehicle. Uber had disabled it because they were testing their own software.
So assuming the driver knows all of this (and that's a big assumption), then you have the blame shared two ways, and it's hard to tell who deserves more.
(1) You'd have the driver being at fault, since they were responsible for controlling the vehicle at the time, even though the computer was doing the majority of the driving. In this case, the driver should not have been using their phone and ignoring the road and their duties to control the car.
(2) Uber should share some blame for not building alerts to the driver into the system.
But how much of these responsibilities Uber made clear to the driver is very much worth knowing, because however you slice it this was not so much a failure of technology as human negligence.
But then again these systems also make people lazy when the vehicle drives perfectly 99% of the time...
If Volvo's emergency braking were producing that many false positives, it would be a problem when humans are driving its cars, and I am pretty sure that is not the case, or else the NHTSA and its equivalents in other countries would be investigating ordinary Volvos with this feature.
If the claim is that Volvo's system is intervening in valid cases where Uber's system would (arguably) have handled it, then Uber's system is driving too aggressively or is too slow in responding. Humans, when paying attention, can drive Volvo's cars without often triggering the emergency braking.
The detection at 6 seconds was just of an object though, not an object moving into the car's path. You couldn't drive a car if you had to constantly brake because objects (such as people standing by the road) were being detected.
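Right, the question is path convergence, not mere detection. A toy version of that check (all field names and numbers are invented):

    # Predict whether a tracked object will be inside the vehicle's
    # corridor at the moment the vehicle reaches it.
    def crosses_our_corridor(lateral_offset_m, lateral_speed_ms,
                             range_m, closing_speed_ms, halfwidth_m=1.5):
        if closing_speed_ms <= 0:
            return False                      # we are not closing on it
        t_reach = range_m / closing_speed_ms  # seconds until we arrive
        predicted = lateral_offset_m + lateral_speed_ms * t_reach
        return abs(predicted) <= halfwidth_m

    # Person standing 6 m off our path on the median: no alarm.
    print(crosses_our_corridor(6.0, 0.0, 115.0, 19.2))   # False
    # Person walking a bicycle toward our lane at 1 m/s: alarm.
    print(crosses_our_corridor(6.0, -1.0, 115.0, 19.2))  # True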
It's not clear at what point, between detection 6 seconds out and the determination that emergency braking was necessary 1.3 seconds out, the car ascertained that a collision would occur.
Was there any other determination in between, and when? What I'd like to see is Uber's modelling of the woman's trajectory and the likelihood of collision across the 6-second window. That's completely left unsaid.
The average braking distance of a car is about 24 m at 40 mph, which is approximately the distance between the woman and the car at 1.3 seconds out. So perhaps the 1.3 s figure wasn't the first moment the car determined a brake was necessary, but rather the last moment the car could have braked to prevent a substantial collision. I want to know the first moment the car determined that a brake was necessary at all. It's likely not 6 s, but it's also likely not 1.3 s. It seems this was entirely preventable, or at least the collision impact could have been substantially mitigated, had there been a braking and/or warning system in place.
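Even if 1.3 s really was the last useful moment, braking would still have mattered. A sketch, again assuming roughly 0.8 g of achievable deceleration:

    MPH_TO_MS = 0.44704
    v0 = 43 * MPH_TO_MS    # ~19.2 m/s
    decel = 0.8 * 9.81     # assumed full-braking deceleration, m/s^2

    delta_v = decel * 1.3              # ~10.2 m/s shed in 1.3 s of braking
    v_impact = max(0.0, v0 - delta_v)  # ~9.0 m/s remaining
    print(v_impact / MPH_TO_MS)        # roughly 20 mph at impact

Pedestrian fatality risk falls steeply between roughly 40 mph and 20 mph, so even last-moment braking is far from futile.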
Shutting off the brakes on literally the only driving agent tasked with full attention is inexcusable. But that's what they did. To me that's murder. They used to have two operators: one for tagging circumstantial data, the other to override the car when necessary and keep eyes on the road at all times. Either keep that setup, shut off the car's emergency brakes, and put a warning system in place for the 'driver', or do not shut off the emergency brakes at all. Instead they put a single person in the car, tasked her with things that kept her eyes off the road half of the time, and shut off the brakes for the AI. That's insane.
Nice little FUD by the WSJ in their subheadline: "Pedestrian tested positive for methamphetamine and marijuana" -- not referred to again in the article; moreover, I have trouble seeing its relevance to the accident.
It is mentioned again at the end of the article. It's relevant to the accident because it provides an explanation for the pedestrian crossing the road outside a crosswalk without adequately checking for traffic.
I guess they didn't have to include it, but it's in the report.
Does this mean the technology is so early that they are struggling to program it to do the right thing in normal conditions, let alone to prevent an accident?
It sure suggests the possibility of "that thing keeps braking when we don't want it to, turn it off", when, you know, human drivers manage just fine with it on. If you have to disable the Volvo's built-in safety features to get your driving software to work, then you're not ready to do a road test.
Does this mean the problem is more complicated and nuanced than we had originally assumed so that they are constantly going to struggle to program it to even barely match the performance of a human being?
I'm still disappointed that the system has to use underlying maps to know where lanes are and what the speed limits are in them. What happens when the map is less than 100% perfect?
Isn't the person actually driving the car supposed to be driving the car?
I've ridden in these self driving Ubers. When I rode in one, the driver drove almost the entire time, except on a few straight stretches of road. They always had their hands ready to grab the wheel, were always attending to what was happening etc.
It seems like the marketing and the engineering got crossed here. Marketing says these were self driving, but anybody who rode in them knew they weren't. They were supposed to be getting driven by real drivers. From the report, it sounds like the drivers were listening to the marketing instead of the engineering team (who presumably would have told them that the system doesn't brake on its own).
It sounds like the real driver wasn't driving as they were supposed to be. From the video, it looked like they were reading something on their phone[1] instead of driving the car.
Compare this to a pilot flying on autopilot. They don't shut their radios off and stop paying attention to the flight; they still fly the airplane and remain attentive to what is happening with it. That's what this driver should have been doing, not looking at their phone.
It frustrates me that this level of negligence could set back self-driving tech, something that will save countless lives. This was the Chernobyl moment for self-driving tech: it's safer than the alternatives, but now this is all people will associate it with.
[1] The driver states that they were interacting with the Uber self-driving system, not their phone.
The driver has to look down at a console to see warnings like this AND drive the car? This, plus the fact that the emergency braking system was off, tells me that the accident was 100% the fault of Uber even if the "driver" were dancing in the backseat.
Yet another reason in favour of requiring professional engineers to design and implement safety-critical software features: "move fast and break things" becomes unacceptable when those "things" are the lives of living, breathing people.
Yikes, it had not occurred to me that even Uber would ignore their automation system's pleas to emergency brake because it was making their cars look too jumpy.
The one thing I spent years teaching my wife is that when there’s anything untoward on the highway, JUST STOP. I’ve avoided at least two major accidents by stopping instead of swerving — in a few feet you can get your speed down to levels where a crash won’t be fatal or severely injurious. And even if you get rear ended by stopping short, it’s usually at a much lower speed. All autonomous cars should just STOP when something is not right.
From "They Write the Right Stuff", the famous article about the Space Shuttle's on-board flight software:
> But how much work the software does is not what makes it remarkable. What makes it remarkable is how well the software works. This software never crashes. It never needs to be re-booted. This software is bug-free. It is perfect, as perfect as human beings have achieved. Consider these stats: the last three versions of the program — each 420,000 lines long — had just one error each. The last 11 versions of this software had a total of 17 errors. Commercial programs of equivalent complexity would have 5,000 errors.
Also from the article: “If the software isn’t perfect, some of the people we go to meetings with might die."