NTSB: Autopilot steered Tesla car toward traffic barrier before deadly crash

509 points | nwrk | 7 years ago | arstechnica.com | reply

521 comments

[+] Animats|7 years ago|reply
NTSB:

• At 8 seconds prior to the crash, the Tesla was following a lead vehicle and was traveling about 65 mph.

• At 7 seconds prior to the crash, the Tesla began a left steering movement while following a lead vehicle.

• At 4 seconds prior to the crash, the Tesla was no longer following a lead vehicle.

• At 3 seconds prior to the crash and up to the time of impact with the crash attenuator, the Tesla’s speed increased from 62 to 70.8 mph, with no precrash braking or evasive steering movement detected.

This is the Tesla self-crashing car in action. Remember how it works. It visually recognizes rear ends of cars using a BW camera and Mobileye (at least in early models) vision software. It also recognizes lane lines and tries to center between them. It has a low resolution radar system which ranges moving metallic objects like cars but ignores stationary obstacles. And there are some side-mounted sonars for detecting vehicles a few meters away on the side, which are not relevant here.

The system performed as designed. The white lines of the gore (the painted wedge) leading to this very shallow off ramp become far enough apart that they look like a lane.[1] If the vehicle ever got into the gore area, it would track as if in a lane, right into the crash barrier. It won't stop for the crash barrier, because it doesn't detect stationary obstacles. Here, it sped up, because there was no longer a car ahead. Then it lane-followed right into the crash barrier.

That's the fundamental problem here. These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design. This is not an implementation bug or sensor failure. It follows directly from the decision to ship "Autopilot" with that sensor suite and set of capabilities.

This behavior is alien to human expectations. Humans intuitively expect an anti-collision system to avoid collisions with obstacles. This system does not do that. It only avoids rear-end collisions with other cars. The normal vehicle behavior of slowing down when it approaches the rear of another car trains users to expect that it will do that consistently. But it doesn't really work that way. Cars are special to the vision system.

How did the vehicle get into the gore area? We can only speculate at this point. The paint on the right edge of the gore marking, as seen in Google Maps, is worn near the point of the gore. That may have led the vehicle to track on the left edge of the gore marking, instead of the right. Then it would start centering normally on the wide gore area as if a lane. I expect that the NTSB will have more to say about that later. They may re-drive that area in another similarly equipped Tesla, or run tests on a track.

[1] https://goo.gl/maps/bWs6DGsoFmD2
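
A toy sketch of the mechanism described above, with hypothetical function names and numbers (illustrative only, not Tesla's actual code): a lane keeper that aims for the midpoint of whatever lines the vision system reports, plus a traffic-aware cruise controller that resumes the set speed once no lead vehicle is tracked. If the "lane" being centered in is actually the widening gore, the steering target drifts toward the barrier while the car speeds back up:

    # Toy sketch of the lane-following / cruise behavior described above.
    # Hypothetical names and values -- not Tesla's actual code.

    def steering_target(left_line_x, right_line_x):
        """Aim for the midpoint of whatever the vision system calls the lane.

        If the car starts tracking the gore's painted wedge as a lane, this
        midpoint drifts toward the crash barrier as the two lines spread apart.
        """
        return (left_line_x + right_line_x) / 2.0

    def target_speed(set_speed, lead_speed=None):
        """Traffic-aware cruise: follow the lead car, else resume the set speed.

        With no lead vehicle tracked (the last 4 seconds of the NTSB timeline),
        the car accelerates back toward the driver's set speed.
        """
        return min(set_speed, lead_speed) if lead_speed is not None else set_speed

    # Example: a normal 3.7 m lane vs. a gore that has widened to 7 m.
    print(steering_target(0.0, 3.7))   # 1.85 -- normal lane center
    print(steering_target(0.0, 7.0))   # 3.5  -- "center" now points into the gore
    print(target_speed(70.8, 62.0))    # 62.0 mph while following a lead car
    print(target_speed(70.8))          # 70.8 mph once the lead car is gone

The sketch deliberately leaves out forward-obstacle detection; why the radar path doesn't brake for the barrier is discussed further down the thread.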

[+] falcolas|7 years ago|reply
One more thing to note: anecdotal evidence indicates that Tesla cars did not attempt to center within a lane prior to an OTA update, after which multiple cars were shown on video exhibiting this "centering" action into gore areas (requiring manual input to avoid an incident).

To me, that this behavior was added via an update makes it even harder to predict - your car can pass a particular section of road without incident one thousand times, but an OTA update makes that one thousand and first time deadly.

Humans are generally quite poor at responding to unexpected behavior changes such as this.

[+] FireBeyond|7 years ago|reply
Meanwhile, Tesla is busy putting out press releases saying "We believe the driver was inattentive, and the car was warning him to put his hands on the wheel."

I am utterly, completely lacking in surprise that they didn't provide the relevant context, "... fifteen minutes prior."

This just looks... really bad for Tesla. It's more important to them to protect their poor software than drivers.

[+] raldi|7 years ago|reply
> The paint on the right edge of the gore marking, as seen in Google Maps, is worn

Indeed; here's a close-up of the critical area. Note that the darker stripe on the bottom of the photo is NOT the crucial one; the one that the car was supposed to follow is the much more faded one above it, which you can barely see:

https://www.google.com/maps/@37.410912,-122.0757037,3a,75y,2...

(Note that I'm not blaming the faded paint; it's a totally normal situation on freeways that it's entirely the job of the self-driving car to handle correctly. But I think it was what triggered this fatal flaw.)

[+] nlawalker|7 years ago|reply
>> These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design. This is not an implementation bug or sensor failure.

A little off topic, but I'm curious: I usually use "by design" to mean "an intentional result." How do other people use the term? In this case, the behavior is a result of the design (as opposed to the implementation), but is surely not intentional; I would call it a design flaw.

[+] jfrankamp|7 years ago|reply
To follow on, there have already been reports of Autopilot drifting and mismanaging that spot on the freeway (although without this catastrophic result). That earlier reporting supports this explanation.

Aside: the follow-another-car heuristic is dumb. It's ultimately offloading the AI/decision-making work onto another agent, an unknown agent. You could probably have a train of Teslas following each other following a dummy car that crashes into a wall, and they'd all do it. A car 'leading' in front that drifts into a metal pole will become a stationary object and thus undetectable.

[+] dekhn|7 years ago|reply
Although I don't work in self-driving cars, I do know a fair amount of ML and AI, and I have to be honest: if my bosses asked me to build this system, I would have immediately pointed out several problems and said that this is not a shippable product.

I expect any system that lets me drive with my hands off the wheel for periods of time to deal with stationary obstacles.

What is being described here, if it's correct, is a literal "WTF" compared to how Autopilot was pitched.

I wouldn't be surprised if the US ultimately files charges against Tesla for wrongful death by faulty design and advertising.

[+] TheLoneAdmin|7 years ago|reply
Makes me wonder about the future of self-driving cars. I would think that the first thing that should be programmed in is to not run into objects that are not moving.

IMO, maybe the roads need to be certified for self-driving. Dangerous areas would be properly painted to prevent recognition errors. Every self-driving system would need to query some database to see if a road is certified. If not, the self-driving system safely disengages.

[+] spiderPig|7 years ago|reply
Kudos to the NTSB. Events like this and companies like Tesla are exactly why we need regulation and government oversight. Tesla's statement that the driver was given "several warnings" is just a flat out lie.
[+] databus|7 years ago|reply
The First Law of Autopilot should be: Don't run into shit. If this fails then it's not autopilot, it's "Autocrash".
[+] ckastner|7 years ago|reply
> This is the Tesla self-crashing car in action. Remember how it works. It visually recognizes rear ends of cars using a BW camera

I'm sure Tesla's engineers are qualified and it is certainly easy to second-guess them, but it is beyond me why they would even consider a BW camera in a context where warning colors (red signs, red cones, black and yellow stripes, etc.) are an essential element of the domain.

[+] ocdtrekkie|7 years ago|reply
Before it even entered the gore area, it likely centered between the actual lane of the highway and the exit lane. Bear in mind, when a lane forks, there is a stretch where the lane is wider than average. And with the gore area lines worn, the car may have missed the gore entirely. Once it was centered in the gore area, presumably it didn't consider the lines under or directly in front of the car to be lane lines.
[+] binarybits|7 years ago|reply
Tesla cut ties with Mobileye in 2016, and this was a 2017 Model X, so it probably wasn't running Mobileye's software.
[+] retube|7 years ago|reply
> These vehicles will run into stationary obstacles at full speed with no warning or emergency braking at all. That is by design

Really? Really?? I mean, if I were designing a self-driving system, pretty much the first capability I would build is detection of stuff in the way of the vehicle. How are you to avoid things like fallen trees, or stopped vehicles, or a person, or a closed gate, or any number of other possible obstacles? And how on earth would a deficiency like that get past the regulators?

[+] mirimir|7 years ago|reply
Shortly after the accident, a few other Tesla drivers reported similar behavior approaching that exit.

But on the other hand, gores like that can also trick human drivers. Especially if tired, with poor visibility. In heavy rain or whiteout, I typically end up following taillights. But slowly.

[+] baconmania|7 years ago|reply
What's your source for the claim that Tesla's system "doesn't detect stationary objects"? From the reference frame of a moving Tesla, both globally stationary and globally moving objects will appear to be in motion.
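
(A common explanation, sketched here with made-up numbers and hypothetical names rather than anything from Tesla: automotive Doppler radar reports the closing speed to each return, and subtracting that from the car's own speed recovers the target's speed over the ground. Returns with near-zero ground speed are often discarded as clutter to avoid phantom braking on overpasses, signs, and guardrails, and a stationary crash attenuator falls into the same bucket.)

    # Sketch of the usual "stationary returns get filtered" explanation.
    # Made-up numbers and a hypothetical threshold; not Tesla's pipeline.

    def ground_speed(ego_speed, closing_speed):
        # Doppler radar measures how fast the range to a target is shrinking
        # (closing_speed); subtracting it from ego speed gives ground speed.
        return ego_speed - closing_speed

    ego = 29.0  # m/s, roughly 65 mph

    barrier = ground_speed(ego, 29.0)   # stationary barrier closes at ego speed -> 0.0
    lead_car = ground_speed(ego, 7.0)   # car ahead doing ~22 m/s -> 22.0

    CLUTTER_THRESHOLD = 1.0  # m/s; returns slower than this look like background

    for name, speed in [("barrier", barrier), ("lead_car", lead_car)]:
        if abs(speed) < CLUTTER_THRESHOLD:
            print(name, "discarded as stationary clutter -> no braking")
        else:
            print(name, "tracked as a moving vehicle")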
[+] seandougall|7 years ago|reply
> The white lines of the gore (the painted wedge) leading to this very shallow off ramp become far enough apart that they look like a lane.

True, but not one into which the car should have merged. Although crossing a solid white line isn't illegal in California, a solid line does mean that crossing is discouraged for one reason or another.

I love seeing the advances in tech, but it’s disheartening to see issues that could have been avoided by an intro driver’s ed course.

[+] privateSFacct|7 years ago|reply
Reservation cancelled.

While the anti-Tesla news is bad, Tesla needs to be clearer that this really is a failure of Autopilot and their model. They can't expect a human to get used to something working perfectly and still stay alert; clearly his hands were close to the wheel in the very recent past (Tesla is famous for not reliably detecting hands on the wheel).

I'm hoping Google can deliver something a bit safer.

[+] agumonkey|7 years ago|reply
I don't get it. The system relies on two kinds of sensors? Radar for the rear ends of cars ahead and optical cameras for broader decision making? So that location confused the cameras and that was it? The car has no ability to understand its surroundings beyond crude parsing of the visual field (trained ML, I suppose)...

that's quite fucked up.

[+] dogma1138|7 years ago|reply
Mobileye (the EyeQ3, which is in AP1) does detection on-chip, including things like road sign recognition; the "software" part is nearly nonexistent for them. It's more of a configuration than some sort of software ANN model like what Tesla is using with AP2 and the NVIDIA Drive PX.
[+] MBCook|7 years ago|reply
“[Driver’s] hands were not detected on the steering wheel for the final six seconds prior to the crash. Tesla has said that Huang received warnings to put his hands on the wheel, but according to the NTSB, these warnings came more than 15 minutes before the crash.”

This kind of stuff is why I’ve lost all faith in Tesla’s public statements. What they said here was, for all intents and purposes, a flat out lie.

Clearly something went wrong here, but they leapt to blaming everyone else instead of working to find the flaw.

[+] devy|7 years ago|reply
Add to that, the last bullet point from page 2 of the official preliminary report[1]:

   At 3 seconds prior to the crash and up to the time of impact with the crash attenuator,
   the Tesla’s speed increased from 62 to 70.8 mph, with no precrash braking or evasive
   steering movement detected.
That, to me, is a strong indicator that Tesla's AP2 cannot recognize a crash attenuator, and probably one of the strongest arguments that AP2 without LiDAR is unsafe and can be fatal (let alone the missing redundancy in sensors and brakes[2]).

[1]: https://www.ntsb.gov/investigations/AccidentReports/Reports/...

[2]: https://arstechnica.com/cars/2018/04/why-selling-full-self-d...

[+] ebikelaw|7 years ago|reply
Musk prefers to lie to the public and benefits from the disintermediation of the media that various Internet distribution channels have allowed. Tesla often makes clearly false statements. SpaceX, when part of their heavy-lift rocket crashed into the ocean, cut away from that scene to a shot of two happy, smiling spokesmodels pretending not to know what had happened. Today, more than ever, we need skepticism, journalism, and vigorous investigation. You can't take anything these companies are saying at face value.
[+] Maybestring|7 years ago|reply
>This kind of stuff is why I’ve lost all faith in Tesla’s public statements.

I don't understand why they would issue any statement other than, 'condolences, we're committed to safety, we're working with the NTSB to understand what happened'

[+] ckastner|7 years ago|reply
> Clearly something went wrong here, but they lept to blaming everyone else instead of working to find the flaw.

Precisely. As I mentioned in another comment: Autopilot indisputably failed.

They are (understandably) shifting the discussion away from the technical issue (Autopilot failed) to the legal issue (who is to blame), because the latter is something that they can dispute.

[+] agildehaus|7 years ago|reply
Truly fixing the flaw would be to skip to Level 4 as Google decided to do, instead of shipping a system that many drivers get too comfortable with, leading to accidents such as this one.

But skipping to Level 4 fairly clearly requires LIDAR at this point in history and Musk is on record saying he thinks it can be done with cameras and radar alone. So pride will prevent that.

[+] jakobegger|7 years ago|reply
Also, "hands were not detected" does not mean that they really weren't on the wheel. Maybe someone who drives a Tesla can comment on how reliable the hand detection is.
[+] dogma1138|7 years ago|reply
It's a feature, maybe? After all, after Insane and Ludicrous modes, Kamikaze kinda fits.

I don't understand why people had faith in AP 2.0 in the first place, especially since Tesla boasted that it took them only 6 months to develop it. And unlike AP 1.0, the regulated ADAS features were not enabled on it (officially, as in outside of AP mode) for the longest time; some still aren't.

[+] jschwartzi|7 years ago|reply
If I were rapidly approaching a barrier at full speed on a highway, I might put my hands up and guard my face before the crash.
[+] agumonkey|7 years ago|reply
Musk has lost touch with reality if that's true.
[+] abalone|7 years ago|reply
> During the 18-minute 55-second segment, the vehicle provided two visual alerts and one auditory alert for the driver to place his hands on the steering wheel. These alerts were made more than 15 minutes prior to the crash.

Whoah. So there were NO alerts for 15 minutes prior to the crash. Compare this with Tesla's earlier statement:

> The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision.[1]

This gives a very different impression. They omitted the fact that there were no warnings for 15 minutes. Frankly that appears to be an intentionally misleading omission.

So basically the driver was distracted for 6 seconds while believing that the car was auto-following the car in front of it.

[1] https://www.tesla.com/blog/update-last-week’s-accident

[+] mymacbook|7 years ago|reply
Reading that initial report is terrifying. I am so glad the NTSB set the record straight that the driver had his hands on the wheel for the majority of the final minute of travel. It really makes me feel like Tesla was out to blame the driver from the get-go. To be clear, the driver is absolutely partially at fault, but my goodness, Autopilot sped up into the barrier in the final seconds — totally unexpected when the car has automatic emergency braking.

Emergency braking feels not ready for prime time. I hope there are improvements there. I don't want to see Autopilot disabled as a result of this; I would rather Tesla use this to double down and apply new learnings.

Just so sad to hear about this guy's death on his way to work - not the way I want to go. :(

[+] ckastner|7 years ago|reply
> His hands were not detected on the steering wheel for the final six seconds prior to the crash.

> Tesla has said that Huang received warnings to put his hands on the wheel, but according to the NTSB, these warnings came more than 15 minutes before the crash.

> Tesla has emphasized that a damaged crash attenuator had contributed to the severity of the crash.

These may or may not have been factors contributing to the death of the driver, and ultimately may or may not absolve Tesla from a legal liability.

However, the key point here is that without question, the autopilot failed.

It is understandable why Tesla is focusing on the liability issue. This is something that they can dispute. The fact that the autopilot failed is undisputable, and it is unsurprising that Tesla is trying to steer the conversation away from that.

The discussion shouldn't be either the driver is at fault or Tesla screwed up, but two separate discussions: whether the driver is at fault, and how Tesla screwed up.

[+] jackson1way|7 years ago|reply
Autopilot failure aside, I find the battery failure quite remarkable too:

> The car was towed to an impound lot, but the vehicle's batteries weren't finished burning. A few hours after the crash, "the Tesla battery emanated smoke and audible venting." Five days later, the smoldering battery reignited, requiring another visit from the fire department.

Where is your LiPo god now? Batteries have more energy density than 20 years ago, ok. But they are also much more dangerous. Now imagine the same situation with Tesla's huge semi batteries. They'll have to bury them 6ft under, like Chernobyl's smoldering fuel rods. Minus the radiation.

[+] netsharc|7 years ago|reply
Dear Elon, want to start a website that rates how fake-newsy government-produced accident reports are? /S

"FDA said my farm is producing salmonella-infected chicken. Downvote their report on this URL!"

[+] gburt|7 years ago|reply
I am generally against what is often called "excessive regulation," but the regulator -- perhaps the FTC -- should aggressively prohibit the misleading marketing message here.

The entire problem stems from calling this lane-keeping mechanism "Autopilot." Tesla should be prohibited from using that language until they have achieved provably safe Level 3+ self-driving.

The problem is exacerbated by Musk's aggressive marketing-driven language. Saying things like "we're two years out from full self-driving" (first said in 2015) and "the driver was warned to put his hands on the steering wheel" (15 minutes prior to the crash) makes Musk look plainly like the bad guy, attempting to mislead.

"Provably safe" probably means some sort of acceptance testing -- a blend of NTSB-operated obstacle course (with regression tests and the like) and real world exposure.

[+] dcposch|7 years ago|reply
Tesla Autopilot makes it to HN pretty much every week now, almost never in a good way.

Every time, we have a big discussion about autopilot safety, AI ethics, etc.

What about lack of focus?

Tesla has already reinvented the car in a big way--all-electric, long range, fast charge, with a huge network of "superchargers". It's taken EVs from a niche environmentalist pursuit to something widely seen as the future of the automobile.

Why are they trying to tackle self-driving cars at the same time?

This feels like a classic mistake and case of scope creep.

Becoming the Toyota of electric cars is a vast engineering challenge. Level 5 autonomous driving is an equally vast engineering challenge. Both represent once-in-a-generation technological leaps. Trying to tackle both at the same time feels like hubris.

If they just made great human-piloted electric cars and focused on cost, production efficiency, volume, and quality, I think they'd be in a better place as a business. Autopilot seems like an expensive distraction.

[+] menacingly|7 years ago|reply
Tesla has to realize these "shame the dead dude" posts are PR nightmares, right?

They are reason enough for me to never consider one: that a private moment for my family might end up a pawn in some "convince the public we're safe using any weasel stretch of the facts we can" effort.

If this is disruption, I'll wait for the old guard to catch up, lest I be disrupted into a concrete barrier and my grieving widow fed misleading facts about how it happened.

[+] RcouF1uZ4gsC|7 years ago|reply
After this incident and Tesla's response to it, I hope Tesla is sued and/or fined into bankruptcy. Tesla is normalizing the release of not-fully-tested software to do safety-critical things, and literally killing people as a result. A message needs to be sent that this is unacceptable. In addition, their first response was a PR-driven one that sought to blame the driver, and it violated NTSB procedures. Safety is probably the most important thing to get right with this type of software, and Tesla is nonchalantly sacrificing safety for marketing.
[+] kevinchen|7 years ago|reply
Tesla Autopilot should be recalled via the next OTA update.

The “Autopilot” branding implies that users need not pay attention, when in reality, the system needs interventions at infrequent but hard-to-predict times. If an engineer at Apple can’t figure it out, then the average person has no chance. Their software sets users up to fail. (Where failure means permanent disability or death.)

Inevitably, Musk fans will claim that recalling Autopilot actually makes Tesla drivers less safe. But here's the problem with Musk’s framing of Autopilot.

Sure, maybe it fails less often than humans. (We don't know whether we can trust his numbers.) But we do know that when it fails, it fails in different ways — Autopilot crashes are noteworthy because they happen in situations where human drivers would have no problem. That’s what people can’t get over. And it is why Autopilot is such a dangerous feature.

An automaker with more humility would’ve disabled this feature years ago. (Even Uber suspended testing after the Arizona crash!) With Musk, my fear is that more people will have to die before there is enough pressure from regulators / the public to pull the plug.

[+] MBCook|7 years ago|reply
So people are asking why the barrier wasn’t detected, and that’s fair.

Here’s another question: why wasn’t the ‘gore’ zone detected?

Why did the car think it was safe to drive over an area with striped white lines covering the pavement?

It saw the white line on the side of that area and decided that was a lane marker, but ignored the striped area you're not supposed to drive on?

If you’re reading the lines on the pavement you have to try to look at all of them.

I don't know if other cars, like those with Mobileye systems, do that, but given Tesla's safety claims they'd better be trying.

[+] mcguire|7 years ago|reply
Here's the most interesting quote to me:

"The crash created a big battery fire that destroyed the front of Huang's vehicle. "The Mountain View Fire Department applied approximately 200 gallons of water and foam" over a 10-minute period to put out the fire, the NTSB reported.

"The car was towed to an impound lot, but the vehicle's batteries weren't finished burning. A few hours after the crash, "the Tesla battery emanated smoke and audible venting." Five days later, the smoldering battery reignited, requiring another visit from the fire department."

Shouldn't it be possible to make the battery safe?

[+] userbinator|7 years ago|reply
This just reconfirms my belief about Tesla's "autopilot" --- most of the time it behaves like an OK driver, but occasionally makes a fatal mistake if you don't pay attention and correct it. In other words, you have to be more attentive to drive safely with it than without, since a normal car (with suspension and tires in good condition, on a flat road surface) will not decide to change direction unless explicitly directed to --- it will continue in a straight line even if you take your hands off the wheel.

Given that, the value of autopilot seems dubious...

[+] LinuxBender|7 years ago|reply
Disclaimer: Taboo comment ahead.

Subtle bugs in self driving cars would be a simple way to assassinate people with low cost overhead. One OTA update to a target and you could probably even get video footage of the job being completed, sent to the client all in one API call.

Surely by now someone must have completed a cost analysis of traditional contractors vs. having a plant at a car manufacturer.

Am I the only one thinking about this?

[+] jakelarkin|7 years ago|reply
Self-driving systems can't reason well about untrained scenarios or the intent of other humans on the road. I think people have grossly underestimated how driving in an uncontrolled environment is really a general AI problem, which we're not even close to solving.
[+] cmurf|7 years ago|reply
Involuntary manslaughter usually refers to an unintentional killing that results from recklessness or criminal negligence, or from an unlawful act that is a misdemeanor or low-level felony (such as a DUI). (Wikipedia)

It's rather uncontroversial that this kind of accident falls under civil law, because there is some degree of liability involved in marketing a product as safer than a human driver when it then fails in an instance where a human driver flat out would not fail, apples to apples. A human driver who is paying attention, which the autonomous system is always doing, would never make this mistake; it could only be intentional.

But more controversial and therefore more interesting to me, is to what degree the system is acting criminally, even if it's unintended, let alone if it is intended. Now imagine the insurance implications of such a finding of unintended killing. And even worse, imagine the total lack of even trying to make this argument.

I think a prosecutor must criminally prosecute Tesla, if not for this incident then for one in the near future. It's an area of law that needs to be aggressively pursued, and voters need to be extremely mindful of treating AI of any kind with kid gloves compared to how we've treated humans in the same circumstances.