
Tempe Police Release Video of Uber Accident

1108 points | austinkhale | 8 years ago | twitter.com

1323 comments

[+] atonse|8 years ago|reply
How did LIDAR and IR not catch that? That seems like a pretty serious problem.

It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.

When I argue for automated driving (as a casual observer), I tell people about exactly this sort of stuff (a computer can look in 20 places at the same time, a human can't; a computer can see in the dark, a human can't).

Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.

[+] jackpirate|8 years ago|reply
> It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.

That's not at all clear to me. I don't know too much about cameras, but it looks to me like the camera is making the scene appear much darker than it actually is.

In the video, you can see many street lights projecting down onto the ground, and the person was walking in the gap between two streetlights. The gap between street lights (and hence the person) was in the field of view of the camera the entire time; they just weren't "visible" in the camera because of the low lighting. I'm confident my eyes are good enough that I would have been able to see this person at night in these lighting conditions. (Whether I could have reacted in time is another question.) It seems to me that the camera just doesn't have the dynamic range needed for driving in these low-light conditions, which is a major problem.

[+] Animats|8 years ago|reply
> How did LIDAR and IR (?) not catch that? That seems like a pretty serious problem.

Something is badly wrong there. That should have been detected by LIDAR, radar, and vision. Yes, they need a wide dynamic range camera for night driving, but such things exist.[1][2] They're available as low-end dashcams; it's not expensive military night vision technology.

Radar should pick up a bicycle at that range. The old Eaton VORAD from about 2000 couldn't, but there's been progress since then.

LIDAR has its limitations; some materials, including the charcoal black fabric used on some desk chairs, are almost nonreflective to LIDAR. But blue jeans, red bike, bare head? Expect solid returns from all of those.

The video shows no indication of braking in advance of the collision. That's very bad. There simply is no excuse for this situation not being handled. The NTSB is looking into this, and they should. I hope the NTSB is able to pry detailed technical data out of Uber and explain exactly what happened. In the first Tesla fatal crash, they didn't get deeply into the software and hardware, because it was clear that the system was behaving as designed, unable to detect a solid tractor trailer crossing in front of the Tesla. The result of that investigation was that Tesla had to get serious about detecting driver inattention, like all the other carmakers with lane keeping and autobrake do.

This time it's a level 4 vehicle, which is supposed to be able to detect any road hazard. The NTSB has the job of figuring out what went wrong, in detail, the way they do for air crashes.

Again, there is no excuse for this.

[1] https://youtu.be/gWqzJF9tOhw?t=211 [2] https://www.youtube.com/watch?v=as12rjzCQnY

[+] 51Cards|8 years ago|reply
This is exactly it. I see people mentioning seeing the victim at the last second, but these vehicles are supposed to be better. They scan in non-visible spectrums with LIDAR. Lack of a safety vest, lack of headlights: none of it is supposed to matter, or at least it shouldn't completely compromise the vehicle's systems. Cameras may not work as well, but an obstacle directly in the path should still be detected, especially an obstacle that would reflect LIDAR and give off a very obvious infrared signature.

This video also shows another point I made recently in a conversation. People need stimulus to keep them alert and focused. I don't think it's at all reasonable to expect someone to sit idly with almost no interaction or responsibility and expect them to stay alert. The human brain doesn't function that way.

[+] mirimir|8 years ago|reply
Indeed! To LIDAR, she was basically standing in the lane, with a bulky bicycle. To visible light, including the driver, who was apparently half asleep or watching the dashboard, she was in shadow until just before the collision.

So yes, LIDAR should have caught this. Easily. So something was clearly misconfigured. And even if the driver had been carefully watching the road, he probably wouldn't have seen her in time.

But I wonder, is there a LIDAR view on the dashboard?

[+] jey|8 years ago|reply
> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.

Seems more likely that it's a software problem. Especially given the rest of Uber's behavior, I wouldn't be surprised if they're aggressively shipping incomplete/buggy software in the name of catching up to more careful competitors like Waymo.

[+] skywhopper|8 years ago|reply
The video here is misleading. A human driver has a much wider field of view and far better low-light vision than this video suggests. That's not to say an attentive driver would have prevented this. But it's also clear that the safety driver was not paying attention, so it's even harder to know.

[+] ahelwer|8 years ago|reply
It isn't at all clear from the video. Video cameras of this quality vastly underperform human vision in low-light conditions.

[+] YeGoblynQueenne|8 years ago|reply
>> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.

A human is not an "obstruction", dammit. I mean, literally: it's not like hitting a wall. The driver's life will never be in danger, and the car may not even be significantly damaged. There's a very special reason why we want self-driving cars to avoid humans that has nothing to do with the reason we want to avoid obstacles. And because this special reason is very, very special indeed, we need much better guarantees that self-driving vehicle AI is extremely good at avoiding collisions with humans than we do for anything else.

[+] gok|8 years ago|reply
In pitch darkness, IR cameras can only see more than a visible-light camera if you somehow had IR headlights more powerful than the visible headlights (they probably don't; that would blind other AVs just like high beams blind other drivers). IR doesn't grant you the ability to see in the dark with infinite range. Lidar can sense shapes at longer range, but at the cost of dramatically worse resolution and latency. It's conceivable the radar/lidar sensor caught the person in the left lane with a bike and decided that was a reasonable place for a person with a bike to be, then lost track of the person while she walked into the right lane (where the visual/IR system could not yet see her).

It's also entirely possible there was an egregious bug. This video doesn't really tell us much.

[+] vicpara|8 years ago|reply
The lady was walking the bike at the edge of a light cone. The inner cam was actually filming in IR. Uber cars have IR cameras, LIDAR, and also multiple radar sensors.

My guess is that the algorithms had never encountered a person crossing the street with a bicycle at night, so they just ignored her or considered it a glitch.

There are two approaches to labeling driving situations: either you positively label the situations where the car needs to react, or you positively label the normal situations where the car does nothing.

With the first approach, you can get a car that kills pedestrians who appear in weird circumstances. I'd also bet a pedestrian who ducks down in the middle of a lane would 100% be killed by a car. Or two people having sex while standing in the middle of a lane.

With the other approach, you get cars avoiding invisible obstacles that appear due to aberrations from the sensors (which are far from perfect).
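The gap between the two policies can be sketched as a toy decision rule; the class names, confidence scores, and the 0.8 threshold below are all made up for illustration, not anything from Uber's actual stack:

```python
# Hypothetical sketch of the two labeling policies described above.
# Class names, confidence values, and the 0.8 threshold are illustrative.

def brake_on_known_hazards(detection, hazard_classes=("pedestrian", "cyclist", "vehicle")):
    """Policy 1: act only when an object is positively classified as a hazard."""
    return detection["class"] in hazard_classes and detection["confidence"] > 0.8

def brake_unless_known_safe(detection, safe_classes=("clear_road", "plastic_bag", "shadow")):
    """Policy 2: act on anything NOT positively classified as safe."""
    return not (detection["class"] in safe_classes and detection["confidence"] > 0.8)

# An ambiguous object (say, a person pushing a loaded bike that the
# classifier has never seen) exposes the difference between the policies:
odd = {"class": "unknown", "confidence": 0.3}
assert brake_on_known_hazards(odd) is False  # policy 1 drives on
assert brake_unless_known_safe(odd) is True  # policy 2 brakes
```

Policy 2 is safer but produces phantom braking on sensor noise, which is exactly the trade-off the comment describes.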

[+] drawkbox|8 years ago|reply
Arizona and Tempe especially have lots of darker roads. The LIDAR/computer vision tuning at night doesn't seem right. Maybe it was adjusting for the changing street light brightness but yes this is one situation and the vehicle had just come from the Mill bridge that has festive lights that are strung along the bridge [1].

This is a situation, though, where the LIDAR should clearly have done better than it did. Maybe it was in a strange state after having seen all the lights and then complete darkness. It looks like they were headed north on Mill Ave over the bridge [2], just past the 202, where it is indeed very dark at night, and probably at the spot right here [3], which matches up with the building in the background (the other way is south, which is busy/urban by ASU). They had just crossed a lit-up bridge, then a dark underpass, then entered this area [3]. The area where it happened [3] does have bike lanes, sidewalks, and a crossing sidewalk close by [4], but it is by a turn-out, so it is not a legal crossing; there are, however, lots of trails through there.

This video is worse than expected by far and may be forever harmful to the Uber brand in terms of software.

In AZ I usually see the self-driving cars out in the day, maybe there is lots of night tuning/work to do yet.

[1] https://i.imgur.com/kwxjW36.jpg

[2] https://goo.gl/maps/ey1RA47tKBJ2

[3] https://goo.gl/maps/gpugzAZKxcS2

[4] https://goo.gl/maps/Ni18GfjMP962

[+] samdoidge|8 years ago|reply
The pedestrian is a lot more obvious to the eye than I suspected, and it's actually quite shocking. They are correct to stop all road tests until they have investigated why they are missing this.
[+] munk-a|8 years ago|reply
It strikes me as extremely disingenuous if this is all Uber gave to the police. They should be making as much raw data as possible available. At the very least it'd let other companies test their AIs against the scenario and see if they would catch sight of and be able to avoid the pedestrian, if not then this is one more data point to train them on so it doesn't happen again.
[+] martin_bech|8 years ago|reply
I'm 99-100% convinced that the Automatic Emergency Braking (AEB) on my Tesla would have braked for that. The promise of these systems, as you also point out, is that they can see things humans can't. The "real" cameras on this car (not this dashcam footage), and the LIDAR, should be fine with it being near pitch black.
[+] XR0CSWV3h3kZWg|8 years ago|reply
Yeah, releasing the sensor data beyond the human-visible spectrum would be much more informative about whether a better-designed AV would have dealt with this better.

I'm glad it was not me driving down that road that night, I don't think I could have prevented it.

[+] CodeWriter23|8 years ago|reply
It's pretty clear to me (from the second half of the video) the driver was looking down at her phone and glancing up at the road periodically. IMO if she had been focusing on the road, she would have at least started braking before hitting the pedestrian. Or perhaps actually stopped before that happened.
[+] donpdonp|8 years ago|reply
I agree that's a textbook case for the non-visual-spectrum sensors. It's possible that lidar DID catch it, but the avoidance logic decided to continue forward. For example, if it decided a collision was impossible to avoid, swerving might make things worse. It's also possible the logic thought the timing was such that the bike would pass after the car crossed where the bike was going, so slowing down would actually cause a collision.
[+] osrec|8 years ago|reply
The lady can be seen fairly clearly (even in this poor-quality video) at 0:03, and impact occurs at 0:04. That's 1 s, which works out to a distance of approx. 17 m. If the guy was watching (sort of the point of him being there, really), he could have slowed the car significantly and probably even stopped it. These are test vehicles being treated like prod vehicles. They should probably not be on streets with pedestrians quite so soon.
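
The ~17 m figure checks out, assuming the 38 mph speed that police reported elsewhere in this thread:

```python
# Back-of-the-envelope check of the 1 s / ~17 m arithmetic above,
# assuming the 38 mph speed from the police report cited in this thread.
MPH_TO_MS = 0.44704

speed_ms = 38 * MPH_TO_MS        # ~17.0 m/s
distance_in_1s = speed_ms * 1.0  # metres covered in the ~1 s of visibility
assert round(distance_in_1s) == 17
```
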
[+] manmal|8 years ago|reply
Regarding how the LIDAR did not catch that, there are 4 possibilities I can think of:

1. A software bug failed to recognize the obstacles, or misclassified them, or they fell below some probability threshold.

2. The LIDAR wasn't working at the time, and the car did not shut down.

3. The victim's clothing absorbs the LIDAR's wavelength almost completely, such that she appeared as a "black hole" and was ignored by the algorithm because this occurs commonly. Unlikely, though, since the bike itself would surely have registered?

4. It's hard to see in the video, but is the car going up a slope? In that case, if the LIDAR didn't look up far enough, it could have failed to see the victim for optical reasons.
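
Possibility 1 can be illustrated with a toy confidence filter; the labels, scores, and threshold are hypothetical, not a claim about Uber's actual pipeline:

```python
# Illustrative sketch of possibility 1: a confidence threshold silently
# discarding a real obstacle. All names and values here are hypothetical.

def filter_detections(detections, threshold=0.7):
    """Keep only detections the classifier is sufficiently confident about."""
    return [d for d in detections if d["confidence"] >= threshold]

# A person pushing a bike laden with bags is an unusual silhouette, so a
# classifier may assign it low confidence in every known class:
frame = [
    {"label": "car", "confidence": 0.95},
    {"label": "pedestrian", "confidence": 0.45},  # the unusual object, misjudged
]
kept = filter_detections(frame)
assert [d["label"] for d in kept] == ["car"]  # the pedestrian is dropped
```

The failure mode is quiet: the sensor saw her, but downstream logic never acted.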

[+] scythe|8 years ago|reply
It seems like this perspective may come from the idea that processing camera input is a formality. But the best estimates of the practical computing power of the brain are based on its visual processing capacity, because we know that's a hard problem. CAPTCHAs all depend on humans' ability to process images semantically faster than a computer (spambot). While it probably isn't unsolvable, I don't think it's surprising that this is consistently a challenge.
[+] danso|8 years ago|reply
> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.

It's a bit too early to make that conclusion. For all we know, the equipment was malfunctioning. Which I guess technically leads to your point, but we'll have to wait for the investigation to actually know what failed vs. what met expectations (I worry that expectations and tolerances, as set by the car companies, will be revealed to not be as comfortable as we might assume).

[+] vkou|8 years ago|reply
> It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.

This was taken by a video camera, which has a much lower range of detectable brightness than the human eye. The pitch-black spots in the video are almost certainly not pitch-black if you were to look at them in person.

[+] IncRnd|8 years ago|reply
> It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.

Actually, a human driver would be expected to have less visual trouble in this case. People's eyes are far more adaptable to low-light conditions than a video camera's sensor. If you've ever tried to take a picture at night using your phone, you've seen this effect.

> When I argue for automated driving (as a casual observer), I tell people about exactly this sort of stuff (a computer can look in 20 places at the same time, a human can't. a computer can see in the dark, a human can't).

Except that the computer did not do that in this case. This car also uses LIDAR and should have noticed the pedestrian long before the accident occurred.

> Yet this crash proves that all the equipment in the world didn't catch a very obvious obstruction.

Either the sensor equipment or the software was defective, otherwise the pedestrian would have been detected.

[+] coding123|8 years ago|reply
I'd like to see an Uber SDV drive on Waymo's test tracks (where they have the employees pop all the shit at it). And just see what it does. I'm guessing it will be ridiculous and nightmarish.
[+] vadimberman|8 years ago|reply
In addition to that, even if we were limited to the "last moment", there was about half a second to a second of time to react. Correct me if I'm wrong, but that should be enough for the car to at least try something.

Isn't the car supposed to brake to minimise the collision, if the swerving is too dangerous (and it wasn't in this case, as the road wasn't too busy)?

[+] jonathanstrange|8 years ago|reply
The human driver appears to be reading. I know that's a pretty hefty accusation, but I can't shake off the impression.
[+] arduinomancer|8 years ago|reply
I would think the car would have been able to at least do something in this situation? It looks like it didn't react at all.
[+] sv12l|8 years ago|reply
> It's clear from the video that a human driver actually would've had more trouble since the pedestrian showed up in the field of view right before the collision, yet that's in the visible spectrum.

I'm not sure; please look at this pic: https://imgur.com/a/VfBck. You can clearly see there are at least 10-15 meters between them right at the time she pops up. Now I don't know the speed of the car, but I'd wager a human driver (if s/he was alert) would have attempted to brake at that moment.

[+] aecs99|8 years ago|reply
I currently work full-time in the self-driving vehicle industry. I am part of a team that builds perception algorithms for autonomous navigation. I have been working exclusively with LiDAR systems for over 1.5 years.

Like a lot of folks here, my first question was: "How did the LiDAR not spot this?". I have been extremely interested in this and kept observing images and videos from Uber to understand what could be the issue.

To reliably sense a moving object is a challenging task. To understand/perceive that object (i.e., shape, size, classification, position estimate, etc.) is even more challenging. Take a look at this video (set the playback speed to 0.25): https://youtu.be/WCkkhlxYNwE?t=191

Observe the pedestrian on the sidewalk to the left. And keep a close eye on the laptop screen (held by the passenger on right) at the bottom right. Observe these two locations by moving back and forth +/- 3 seconds. You'll notice that the height of the pedestrian varies quite a bit.

This variation in pedestrian height and bounding box happens at different locations within the same video. For example, at 3:45 mark, the height of human on right wearing brown hoodie, keeps varying. At 2:04 mark, the bounding box estimate for pedestrian on right side appears to be unreliable. At 1:39 mark, the estimate for the blue (Chrysler?) car turning right jumps quite a bit.

This makes me believe that their perception software isn't robust enough to handle the exact scenario in which the accident occurred in Tempe, AZ.

I think we'll know more technical details in the upcoming days/weeks. These are merely my observations.
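
A common generic remedy for the per-frame height jitter described above is temporal smoothing of track attributes, e.g. an exponential moving average. This is a sketch of the general idea under made-up numbers, not a claim about any particular perception stack:

```python
# Sketch: exponentially weighted moving average over per-frame height
# estimates of a tracked object. Values are illustrative (metres).

def ema_smooth(heights, alpha=0.3):
    """Blend each new estimate with the running average; lower alpha = smoother."""
    smoothed = [heights[0]]
    for h in heights[1:]:
        smoothed.append(alpha * h + (1 - alpha) * smoothed[-1])
    return smoothed

raw = [1.7, 1.1, 1.8, 0.9, 1.75]   # jittery per-frame pedestrian heights
out = ema_smooth(raw)
# the smoothed track varies far less frame to frame than the raw estimates
```

Smoothing trades latency for stability, which is itself a hazard at speed: a track that takes several frames to settle is a track the planner reacts to late.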

[+] chrsstrm|8 years ago|reply
The description given before the video was released painted a picture in my mind that the woman was on the median and "suddenly" entered the roadway in front of the vehicle. I pictured someone darting across the road directly in front of the car, with no way to stop in time.

This video shows a completely different scenario. The woman started on the median, but the vehicle was in the #2 lane. She wasn't visible to the naked eye but she also wasn't darting into traffic and had to cross the #1 lane before even being in the path of the vehicle. A human driver certainly would have difficulty stopping in time, but why did the sensor package not pick her up? This doesn't appear to be the close call we were told it was. To me, this seems like exactly the scenario that autonomous driving vehicles are intended to prevent.

[+] cameldrv|8 years ago|reply
Pathetic and sad performance by the vehicle and the "safety" driver. The woman does not "appear out of nowhere"; she was in the roadway for some time. She was not wearing all black, she had red hair, and her shoes were reflective. Even if we are to believe that their camera is this crappy, they still have the lidar, and it appears the brakes were not applied. Even 500 ms of braking at 0.8 g sheds ~9 mph. That might have saved her life. Ultimately, if Uber's car cannot see a pedestrian crossing the street at walking pace at night without hitting him/her, it should not be operating at night.
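
The braking arithmetic in that comment can be verified directly:

```python
# Verifying the figure above: 0.8 g of braking held for 500 ms.
G = 9.81            # m/s^2
MS_TO_MPH = 2.23694

delta_v_ms = 0.8 * G * 0.5           # speed shed in m/s
delta_v_mph = delta_v_ms * MS_TO_MPH
assert round(delta_v_mph) == 9       # matches the ~9 mph claim
```
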
[+] brokenmachine|8 years ago|reply
Wow, that self driving car is total crap if it can't pick up that obstacle. Doesn't get much more obvious than that. Slowly walking from the left lane into your lane on a fairly straight road.

I bet she was more visible to the human eye than on that video as well. You'd surely have seen someone crossing the road like that.

Of course the pedestrian wasn't doing the smartest thing but I believe a human driver would have at least hit the brakes had they been actually looking forward and paying attention.

I thought the actual advantage of self-driving cars is that they're always meant to be looking forward and paying attention. That doesn't appear to have happened in this case.

[+] JohannesH|8 years ago|reply
In my mind this accident is on Uber no matter how you interpret the video.

Scenario 1: Lets say the pedestrian was visible to the naked eye and sensors. The model and safety operator still didn't "see" her and act in time. Who to blame? Uber.

Scenario 2: The lighting and environment was in a condition where neither the model nor the safety operator could see the road more than 10 feet in front of the car, yet neither thought it to be irresponsible to go at full speed. Who to blame? Uber.

I believe that:

1. The sensors should have detected the pedestrian from far away even if the lighting was bad at the time. I mean that's sort of the point with autonomous vehicles, that they are better at perceiving their surroundings and can make better decisions on how to act quicker than any human could.

2. The safety operators are not engaged enough in their tasks to be effective. I think people underestimate how boring it must be to have a job where 99% of the time you should do absolutely nothing other than stare straight ahead and be ready for a situation like this. This problem is hard to solve. Maybe we should be training the model on closed tracks and only release it on real roads when it passes some sort of test where it is put through various scenarios. Like a driving instructor for AIs.

For those of you who think the pedestrian is to blame. I agree that the pedestrian might have made a bad decision by expecting the cars to brake, however these situations occur all the time. She didn't dart across the road or jump in front of the car suddenly. Yesterday I helped an elderly guy across 4 lanes of traffic which took about 1 minute. All you can do in that situation is to hope you are visible to the drivers and that they will stop before running you over.

[+] stefan_|8 years ago|reply
If there is something that isn't going to help the public perception of autonomous cars at all, it's releasing a compressed to shit capture of another video showing a single camera angle from dozens.

I would say it's a deliberate attempt to manipulate if I didn't also strongly believe that ignorance on the part of the police department has led them to believe that autonomous cars could even exit a parking lot without data from many more than this one camera, not to mention the vastly more useful LIDAR on top.

(That's before you consider the video angles shown here are just for dashcam purposes. The real cameras for the autonomous driving are in the sensor array on top of the roof)

[+] YeGoblynQueenne|8 years ago|reply
So now we all know: Elaine Herzberg did not run out in front of the car, as the police said; she was walking at a normal pace. She was not in the shadows; camera footage is typically darker than human vision. And the reason the first thing the driver knew of the crash was the sound of it is that she didn't have her eyes on the road.

And all the car's sensors, its superior perception of its environment, and its superhuman reaction times were no use without a human-level understanding of its environment to go with them. It couldn't tell that there was a person crossing the road in front of it, and even if it did, it didn't have a concept of what a person is and why it should try to avoid them; in any case, it just didn't know what to do about it.

So can we now please roll back the off-the-charts hype about self-driving cars being safer than human drivers? It's abundantly clear that this is not the truth (not yet. Yet.). That's just not the state of the art at this point. We're like the people jumping off towers with crazy "flying" apparatus in the 18th century because they were convinced they could fly that way.

Or maybe we should just stop pretending that what we really care about is safety and admit we just want to have cool tech toys to play with, no matter the consequences.

[+] TrainedMonkey|8 years ago|reply
There was almost a second where the woman was clearly visible, and yet the car did not attempt to emergency brake. I would not expect the vehicle to stop safely, but it could definitely have slowed down and reduced the energy of the collision.

This is not criticism of self-driving tech; I would not expect an alert human driver to avoid the collision either, due to limited reaction time. With technology, however, we should be able to do better than humans, particularly when it comes to reaction times. Clearly, there is still some work to do.
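
Because kinetic energy grows with the square of speed, even partial braking removes a disproportionate share of impact energy. A quick illustration using the 38 mph figure reported elsewhere in this thread and the ~9 mph shed computed in a sibling comment:

```python
# Kinetic energy scales with v^2, so partial braking pays off quadratically.

def impact_energy_ratio(v_initial, v_final):
    """Fraction of the original impact energy remaining after slowing down."""
    return (v_final / v_initial) ** 2

# Shedding ~9 mph from 38 mph (roughly half a second of hard braking):
ratio = impact_energy_ratio(38.0, 29.0)
assert ratio < 0.6  # well under 60% of the unbraked impact energy remains
```
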

[+] baking|8 years ago|reply
This was the original story: "After the Uber collision, the car continued traveling at 38 miles per hour, according to the Tempe police chief"

https://www.bloomberg.com/news/articles/2018-03-21/for-self-...

In other words, the car _never_ detected the pedestrian and never slowed down on its own. This has nothing to do with _when_ it saw her; it clearly didn't, yet both the camera and the LIDAR should have been able to.

[+] DoreenMichele|8 years ago|reply
A number of comments here have touched on the fact that she was apparently homeless. I spent 5.7 years homeless. Well before that, I had a college class on homelessness and did an internship at a homeless shelter. I am the author of the website the San Diego Homeless Survival Guide.

I've made a few comments in other discussions about some of the ways her status may have contributed to this tragedy. I'm just going to link to them with a short identifying blurb. Hopefully, taking them out of context won't make this go weird places.

Potential suicidal tendencies:

https://news.ycombinator.com/item?id=16633727

Potentially jaywalking dangerously and darting out of nowhere:

https://news.ycombinator.com/item?id=16625242

Possibly poor health contributed to her death:

https://news.ycombinator.com/item?id=16625076

I am leaving this here in part because "She was high or crazy" is a common stereotype about homelessness and it tends to not be a compassionate view. There are myriad ways her status could have contributed to the situation without her being either high or crazy. Yes, she could have also been one (or both) of those two things. But plenty of housed people get high or have mental health issues and we don't hand wave off their deaths as "they must have been high or crazy."

So, my hope is to be respectful of people whose feeling is her status could have contributed while putting out hopefully better information. Yes, her status may have contributed. But there is no reason to complete that line of thinking with "because homeless people are usually crazy and/or junkies."

There is a huge shortage of affordable housing in this country. A link backing that up is provided in one of my previous comments. These days, a lot of homeless people are just poor and can't afford rent, even while employed.

[+] danso|8 years ago|reply
Wow. Given that the Uber AV was in the right lane and the victim was crossing left-to-right, I had thought it impossible for the police to claim that the victim couldn't be seen until it was too late. But I didn't expect the road to be so dark, given what we saw in the accident photos (which might have been over-exposed?) and Google Maps, which showed a lot of street/sidewalk lighting.

In the moment that we can fully see her, she does look unambiguously like a person walking a bike across the road (reports say there were plastic bags on the bike, but they weren't obvious/obstructive in the camera view). Is the AV's LIDAR expected to detect this kind of thing, even if it's too dark for human eyes?

The video of the Uber driver doesn't look great for the driver. I mean she doesn't look particularly engaged -- but I suspect that's what most of us would look like at the wheel. But she definitely seems to be looking downwards, right at the moment of impact.

Unless some other incriminating info is discovered, I hope that the driver isn't the sole focus of punishment (doesn't help that she's a convicted armed robber, albeit years ago). Being able to brake in time for the victim seems difficult even in most ideal and alert conditions. And I have to think human operators are going to suffer complacency when 95-99% of the time they never have to actually drive -- making that switch seems to be a situation ripe with problems.

I don't mean that Uber execs/testers/engineers (again, assuming there isn't other incriminating evidence) should be scapegoats. I hope the result involves regulations that add more transparency to reporting (especially in Arizona), and public debate about the expectations of AV and AI.

[+] jmharvey|8 years ago|reply
As presented, this looks like a pretty classic "overdriving your headlights" situation.

Even though they're made of retroreflective material, only two lane divider dashes at a time can be seen in the video, indicating something like 50 feet of visibility. Stopping from 38 mph takes 70+ feet from the time the brakes are applied (and human reaction time adds quite a bit more). Things (people, animals, stopped cars, road hazards) appear in the road fairly routinely. If you can't stop inside the area you can see, you're operating your vehicle recklessly.
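
The stopping-distance claim can be checked with the standard formula d = v² / (2μg), assuming a typical dry-asphalt friction coefficient of about 0.7 (the coefficient is my assumption, not from the comment):

```python
# Checking the "70+ feet" figure with d = v^2 / (2 * mu * g).
# mu = 0.7 is an assumed dry-asphalt friction coefficient.
G = 9.81
MPH_TO_MS = 0.44704
FT_PER_M = 3.28084

v = 38 * MPH_TO_MS                                # ~17.0 m/s
braking_dist_ft = v**2 / (2 * 0.7 * G) * FT_PER_M
# ~69 ft of pure braking, before any reaction-time distance is added,
# consistent with the 70+ ft figure above.
assert 65 < braking_dist_ft < 75
```

At ~50 ft of headlight visibility, the car could not stop within the distance it could see, which is the "overdriving your headlights" point.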

[+] lewis500|8 years ago|reply
This is not what I expected at all. The original news reports based on the police comments made it seem like she was on the side of the road and then randomly darted out. That's not something I would really expect AV's to be able to cope with yet, if ever.

Instead what we see is a scenario that happens all the time: a pedestrian who is sleepy or perhaps with mental problems or on some substances...in any case not hyper alert...crossing a road without looking. It's not that busy at that time of night and I'm sure I have done the same thing. It's very discouraging and angering that these cars are being driven without the capability. I feel very disappointed. To me it's not super important whether an ordinary driver would've stopped: ordinary people drive drunk, exhausted and/or distracted all the time; but I hoped that AV's were already better than that.

[+] mrtksn|8 years ago|reply
Okay, isn't this the kind of a situation where the machine was supposed to excel?

The dynamic range of the video is very low so it looks like the victim comes out of complete darkness but weren't all these sensors supposed to see the obstacle even in complete darkness?

BTW, please consider the low dynamic range of the video when commenting on the human's ability to avoid that accident. After driving in the dark for a while your eyes will adapt and you'll be able to see much more details in the shadows than a regular video camera can record.

[+] HankB99|8 years ago|reply
This seems like a simple case. I would have expected the driver assist in my 2017 Subaru to have reacted to something in the road. I'm surprised that the much more sophisticated self driving system did not.
[+] Robin_Message|8 years ago|reply
Why are safety drivers not:

- working in pairs, so there is social pressure, conversation, and two pairs of eyes to increase alertness and safety?

- doing shifts of 30 - 45 minutes at most [1] (although they could potentially swap back and forth with a co-driver)

- issued a dumb-phone for emergencies and searched for entertainment devices (it's good enough for Amazon warehouse staff)

- being monitored by the driver-facing camera, with training and termination for drivers who can't hack it

- monitored automatically for attention using eye tracking or other methods, with the car safely stopping if lack of attention is detected

- required to take over on a random, regular basis for a short period to keep them engaged and attentive (and obviously, the car keeps driving if they don't take over, but they are marked down)

Due to the boredom, it is an extremely demanding job, but the way it is being done is clearly not good enough.

[1] I can't find anything published about how long the shifts are, but I'm guessing they are longer.

[+] imh|8 years ago|reply
The video makes it look like the woman popped out of the shadows and makes it look like this was unavoidable. But that's not what a person would really see. People's eyes have great dynamic range. Take this picture for example (not mine):

https://cdn-images-1.medium.com/max/2000/1*OQtewILwLl-EssbWY...

What you'd see in real life is much closer to the edited version on the right, while unedited pictures (the version on the left or the uber video released) would make it seem like you can't see shit. A driver paying attention probably would have seen this person from far away. At the very least, the video doesn't convince me this was unavoidable.

[+] slavik81|8 years ago|reply
Why is the car driving at full speed if it can't see the road ahead? Does the car know that visibility is low? What other sensors are they using? Did the car notice the pedestrian once they were fully lit up?

This video raises so many questions. I think we're going to be revisiting this incident over and over again in papers, reports and eventually textbooks.