
This Test Shows Why Tesla Autopilot Crashes Keep Happening

32 points | knuththetruth | 7 years ago | jalopnik.com | reply

55 comments

[+] rayiner|7 years ago|reply
> Tesla has always been clear that Autopilot doesn’t make the car impervious to all accidents

This makes it sound like the problem is that there are edge cases that aren’t handled. In reality, the test shows that Autopilot is only designed to handle the special case of lane keeping and following the car in front of you, but is too simplistic to actually handle the general case of driving the car. It’s a very advanced cruise control, not a self-driving system that still has kinks to work out.

[+] noncoml|7 years ago|reply
It’s clear that Tesla is treating this as a PR/marketing problem, but they fail miserably even at that.

If you think it’s a PR/marketing problem and the system operates as it should, then stop calling it Autopilot and stop advertising cars with full autopilot capabilities*

* “Theoretically the hardware should be enough to enable full auto pilot some time in the future. At the moment use this other Autopilot, which is not really autopilot. Btw we made this really cool autopilot video, but it’s not the Autopilot you have in your car.”

[+] manicdee|7 years ago|reply
The marketing problem is lay people thinking that aircraft autopilots are magical self flying machines when they often require the pilot to provide corrective input, especially during “autoland.”

If you think Tesla Autopilot needs another name, stop operating on mystical beliefs like Reiki and magical self-flying autopilot, and learn what any aircraft autopilot is actually capable of.

[+] King-Aaron|7 years ago|reply
I can understand Tesla's position in saying that the driver should still be aware of their surroundings. But avoiding stationary objects should be the first objective of a system like this, in my opinion, especially if the object is directly in the vehicle's projected path.
[+] mirimir|7 years ago|reply
Sure, but such systems won't work at all if they don't ignore such stationary objects as bridge abutments. And they lack the angular resolution to determine whether stuff is directly ahead, or off to one side. Plus the fact that roads curve.

Or, as in the Thatcham Research video, stuff that's hidden by a leading vehicle. I always make an effort to know what vehicles far ahead are doing. If necessary, I shift off lane occasionally to check for hidden stuff. And when I'm behind large vehicles, I don't follow so closely that I can't see what's ahead.

But if you're going to do all that, why use Autopilot?

[+] manicdee|7 years ago|reply
The only objective of Tesla Autopilot is Traffic Aware Cruise Control and lane keeping. That’s it. End of story. You are there to supervise operation and take action when TACC and LK are insufficient for the environment.

Autopilot when used correctly can significantly reduce fatigue because you are not focussed on the minutiae of keeping pace with the car in front of you and keeping the car in the assigned lane. This frees your attention up for higher level operation such as observing the traffic further ahead, scanning for hazards, and enjoying the scenery to your left and right.

Discerning stationary hazards from stationary scenery is not part of Autopilot’s feature set right now, that is your job as the operator.

[+] adt2bt|7 years ago|reply
Autopilot-bashing articles are becoming more and more popular as people love to gawk at accidents. I believe a lot of the comments on these threads are no more than online rubbernecking.

When these things pop up, I'd love to know the answers to some of these questions.

1. What is the rate of accidents for Tesla drivers with and without Autopilot on similar driving terrain?

2. How 'correctable' are these flaws in Autopilot? I doubt someone is programming in what to do in every situation. Is the solution to twiddle with some weights and pray your ML pipeline spits out a better version? I'm guessing the true answer lies in the middle, but I don't have the technical background to know for sure.

3. I see a lot of references to Waymo leading the pack of would-be self driving vehicles. What is it that Waymo has done that makes their self-driving tech so much better?

4. Are we okay with accepting the costs of self driving cars given their potential in the future? Most major transport-related revolutions have come with a significant human cost at the outset as the early adopters accept a large amount of risk to push the technology forward. With time and usage, things stabilize and become safer. Flying was quite dangerous early on, too.

[+] batmansmk|7 years ago|reply
I'll try to answer as best as I can.

1. Your question seems mostly answered in this recent article https://www.wired.com/story/tesla-autopilot-safety-statistic...

2. I don't know much about the architecture / model used by Tesla, and I don't have open access to the accidents' details. It is challenging for most people to answer your question publicly and based on facts. Sorry!

3. Again, I would say only a very limited number of people have a clear view of the current status. Some claim it is hard for Tesla to manufacture and design cars, solar panels, and batteries while designing an autopilot at the same time. It requires a very broad diversity of skills that may be hard to steer and build from the ground up.

4. Fair enough, point taken. My personal complaint is not about the technology in general but about Tesla's marketing. I hope the first planes didn't claim to be safer. The Titanic did, though :).

[+] manicdee|7 years ago|reply
What Waymo have done is not release a product for sale that can even remotely be confused for some level of autonomy. They have ensured this by not releasing a product.

Autopilot is not autonomous. It needs human supervision, and the fact that some simplistic “autonomy scale” includes “requires human supervision” is the largest part of this problem.

If it requires any supervision, it is not autonomous.

Any discussion of autonomy that even mentions Autopilot is automatically null and void. I would consider it the autonomy equivalent of Godwin’s law: in any discussion about autonomy the chance of comparing the safety of an autonomous system to Autopilot or Cruise Control approaches 1 and the first person to mention the comparison automatically loses the argument.

[+] nopriorarrests|7 years ago|reply
3. Waymo is an R&D project at this moment. They are solving a hard technical problem and disregarding associated costs (such as LIDAR).

Tesla claims that hardware in model 3 is capable of self-driving once software catches up. That allows them to charge 3K USD for "FSD package".

These are two very different approaches to the same problem.

[+] nanis|7 years ago|reply
It seems to me the kids who program these toys have never driven anything other than the RC model they built to get this job. And, of course, it drives fine on the toy "roads" they set up on the carpet in the living room. Real world, not so much.
[+] stephengillie|7 years ago|reply
From a recent article[0]:

> When a car is moving at low speeds, slamming on the brakes isn't a big risk. A car traveling at 20mph can afford to wait until an object is quite close before slamming on the brakes, making unnecessary stops unlikely. Short stopping distances also mean that a car slamming on the brakes at 20mph is unlikely to get rear-ended.

But the calculation changes for a car traveling at 70mph. In this case, preventing a crash requires slamming on the brakes while the car is still far away from a potential obstacle. That makes it more likely that the car will misunderstand the situation—for example, wrongly interpreting an object that's merely near the road as being in the road. Sudden braking at high speed can startle the driver, leading to erratic driving behavior. And it also creates a danger that the car behind won't stop in time, leading to a rear-end collision.

[0]https://news.ycombinator.com/item?id=17274179
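The quadratic scaling behind the quoted passage is easy to see with basic kinematics. A minimal sketch (the 0.8 g deceleration is my assumed figure for hard braking, not from the article):

```python
MPH_TO_MS = 0.44704  # miles per hour -> metres per second
G = 9.81             # gravitational acceleration, m/s^2

def braking_distance_m(speed_mph, decel_g=0.8):
    """Distance needed to brake to a full stop at constant deceleration."""
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * decel_g * G)

# Distance grows with the square of speed: 70 mph needs
# (70/20)^2 ~= 12x the braking distance of 20 mph.
print(round(braking_distance_m(20), 1))  # ~5.1 m
print(round(braking_distance_m(70), 1))  # ~62.4 m
```

That 12x gap is why the 70 mph car must commit to braking while the obstacle is still far away, when classification is least reliable.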

[+] pwinnski|7 years ago|reply
Computers have one huge advantage over humans: they can process a lot of information very quickly.

Humans have one huge advantage over computers: we pattern-match instinctually, building mental models of our surroundings.

Tesla is taking advantage of the first fact, but by avoiding LIDAR, they're falling victim to the second fact.

We humans don't have LIDAR, so it makes sense that a car without LIDAR should be able to match us, but we have brains and visual systems that far exceed anything available today or at any point in the near future when it comes to pattern-matching.

There were 40,100 vehicle deaths in the US in 2017, most of them the result of distracted driving. It's a shame that Tesla's system performs worst at exactly the point where humans need the most help.

[+] MarkMMullin|7 years ago|reply
LIDAR is certainly useful in building up point clouds, especially when coupled with RADAR; however, the devil always lies in the details.

LIDAR is higher resolution than RADAR, but often slower. Counterintuitive, to be sure, but such is the case: the large volume of data can lag reality by some small delta, and at 60 mph, that can add up.
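To put that lag in distance terms, here's a back-of-envelope sketch (the 100 ms latency is an assumed illustrative figure, not a measured LIDAR spec):

```python
MPH_TO_MS = 0.44704  # miles per hour -> metres per second

def stale_distance_m(speed_mph, lag_s):
    """How far the car travels while the sensor picture is out of date."""
    return speed_mph * MPH_TO_MS * lag_s

# At 60 mph, a 100 ms sensor lag means the world the car "sees"
# is already ~2.7 m behind reality.
print(round(stale_distance_m(60, 0.1), 2))  # ~2.68 m
```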

Uber does use LIDAR and RADAR; however, the informed external commentary I've seen (admittedly guesswork based on the NTSB reports) seems to indicate that a sensor fusion error was behind the AZ fatality. Sensor fusion is a beastly complex problem on top of miserable calibration exercises :-(

More importantly, AI only exists in the minds of marketers, media, and the improperly informed. This is all just pattern recognition. We're a ways off from having a system understand that if an occluding obstacle moves out of the way, then the high-priority, previously unknown information now exposed needs to be checked. I'm sure such tests will get hardwired into current systems, but the system is then limited by what does get hardwired into it. At the end of the day, a visual system, no matter how sophisticated, is still only a visual system.

[+] fanzhang|7 years ago|reply
In that video, does the Tesla try to brake, and brake hard?

This seems to be the case: if you watch the video, the Tesla is at a full stop about 10 feet past the car, implying it must have hit the brakes hard well beforehand.

In that case, I don't blame the Tesla for crashing -- you are literally baiting it into the crash, a la a bull with a flag. You make it follow one car, prevent it from changing lanes, and then have an obstacle parked in the middle of the lane.

I'm all for getting Tesla to be safer. The accident count has gotten way too high. But this seems to be a rigged test, and the Tesla did brake.

[+] rhino369|7 years ago|reply
> In that case, I don't blame the Tesla for crashing -- you are literally baiting it into the crash, a la a bull with a flag. You make it follow one car, prevent it from changing lanes, and then have an obstacle parked in the middle of the lane.

This situation is pretty common on the road. And a lot of humans who aren't paying attention get caught by it.

But it's not impossible to avoid: you have to be watching ahead of the car in front of you. And if you can't see in front of the car in front of you, you've got to leave a lot more space so you can brake in time.

[+] bigiain|7 years ago|reply
I don't see that setup as anything out of the normal skill required of human drivers. Cars change lanes from in front of you all the time, sometimes because there's something stopped on the road in front of them.

The test is only "rigged" in the way a unit test looking at a specific edge case is "rigged". Yeah, it's not something that happens every day, but there are very few human drivers who wouldn't have perfectly safely followed the Merc into the right-hand lane and easily missed that inflatable car. Most of us would have noticed the stationary car in front of the car we were following, and quite likely would have already changed lanes before the Merc did.

[+] cjhopman|7 years ago|reply
> In that video, does the Tesla try to brake, and brake hard?

It's different takes edited together. Note the cameraman near the dummy car as the Tesla approaches (you can see him, and part of the video was shot by him).

> In that case, I don't blame Telsa for crashing

And in the case where it didn't brake? Would you blame Tesla then? Well... none of Tesla's driver-assistance features can handle that case; there's no way they would brake. It will not brake for a stationary object (unless that object was previously moving).

[+] oldcynic|7 years ago|reply
You don't blame the Tesla?

If I did that the police and my insurance would blame me. Regardless of if I were braking hard. I'd almost certainly end up with points on my licence if I survived.

[+] pwinnski|7 years ago|reply
According to the NTSB, in the recent fatal case in which a Tesla drove directly into concrete on a highway, it was still accelerating at the moment of impact.

This is a very serious and consistent issue for Tesla's so-called "autopilot," as the article highlights with links to a variety of recent cases.

[+] faitswulff|7 years ago|reply
So...why is it that the default autopilot behavior isn't to slow down drastically if the driver doesn't initiate manual control fast enough?
[+] cjhopman|7 years ago|reply
I don't think you understand. The Tesla autopilot there thinks it's doing a great job, because it thinks there's just an empty lane ahead of it (its model of the world is basically that all stationary things are just pictures on the ground).
[+] adjkant|7 years ago|reply
Because it's been shown that it does not account in any way for stationary objects. It doesn't see any danger at all ahead.
[+] savrajsingh|7 years ago|reply
Autopilot is currently an aspirational name. Perhaps the name should be changed.
[+] mikeash|7 years ago|reply
So, why do non-autopilot crashes keep happening?
[+] Boxbot|7 years ago|reply
Same reason the autopilot crashes happen: Driver isn't paying attention.

Problem is that the autopilot system makes it harder for the driver to pay attention.

[+] nodesocket|7 years ago|reply
The video seems ludicrous. They fail to show the same test with a Tesla in manual mode crashing into the back of the car. I doubt a manual driver would be able to avoid that rear-end accident. I could be wrong, but they should demonstrate it with scientific results and facts instead of making fear-mongering promotional videos.
[+] craftyguy|7 years ago|reply
The video seems plausible. I mean, I regularly avoid road hazards I cannot see until the car in front of me moves to avoid them. The key is to not be an ass and tailgate the person in front of you, so you have time to react.
[+] adjkant|7 years ago|reply
Ludicrous and not 100% scientifically sound are two very different things. The Tesla doesn't appear to brake at all going in, as has been the case in actual crashes. A driver may not be able to avoid any accident but almost certainly could apply brakes to some extent. That braking can be the difference between life and death.
[+] mirimir|7 years ago|reply
Any human paying attention would follow the leading vehicle as it swerved; that was not at all an abrupt swerve. Also, most people know not to focus just on the leading vehicle, and to back off when they can't see around it.
[+] stephengillie|7 years ago|reply
Given the Model 3's poor stopping distance, you may unfortunately be correct that even an aware driver couldn't stop in that space.

  Car                   distance (feet)
  Model 3 (orig)        152
  Model 3 (new)         133
  F-150                 129
  Model X               127
  Camry Hybrid          125
  F-150 Lariat          119
  Model S               118
  Porsche Panamera GTS  110
  Chrysler 300S         109
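Assuming these are the usual 60-0 mph benchmark figures (an assumption; the comment doesn't say), the table implies roughly how hard each car can decelerate:

```python
FT_TO_M = 0.3048
MPH_TO_MS = 0.44704
G = 9.81  # m/s^2

def implied_decel_g(stop_ft, speed_mph=60.0):
    """Constant deceleration (in g) implied by stopping from speed_mph in stop_ft."""
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * stop_ft * FT_TO_M * G)

# Original Model 3 (152 ft) vs Chrysler 300S (109 ft):
print(round(implied_decel_g(152), 2))  # ~0.79 g
print(round(implied_decel_g(109), 2))  # ~1.10 g
```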