What is happening was clear to many from the start: Tesla embodies the behavior of its founder, exaggerating what the technology can actually do right now and selling it as a product without the preconditions for it to be safe. A product that sort of drives your car but sometimes fails, and still requires you to pay attention, is simply a crash waiting to happen. And don't get trapped by the "data driven" analysis, like "yeah, but it's a lot safer than humans", because there are at least three problems with this statement:
1. Ethical. It is one thing to do something stupid and die; it is another for a technology to fail in trivial circumstances that are in theory avoidable. A doctor can also make errors, but a medical device is required to be very safe and reliable in what it does.
2. Wrong comparisons: you should benchmark the autopilot against a rested, focused driver who drives slowly and with a lot of care. Otherwise the statistics do not account for the fact that when you drive, you are in control and can choose your own level of safety. A diligent driver who crashes because of immature software is a terrible outcome.
3. Lack of data: AFAIK there is not even enough data publicly available to tell us the crash rate of Teslas with Autopilot enabled vs. Teslas without, per kilometer, under the same road conditions. That is, you must compare only on the same roads where the autopilot is able to drive.
Autopilot will rule the world eventually, and we will be free to use our time differently (even though this was already possible 100 years ago by investing in public transportation instead of everyone spending the money to own a car... which is sad. Fortunately this happened to a degree in parts of Europe; see for instance northern Italy, France, ...). But until it is ready, shipping it as a feature in such an immature state just to gain an advantage over competitors is terrible business practice.
> 1. Ethical. It is one thing to do something stupid and die; it is another for a technology to fail in trivial circumstances that are in theory avoidable. A doctor can also make errors, but a medical device is required to be very safe and reliable in what it does.
This needs elaboration or citation. You're one of the only folks I've seen come straight out and make this point: that somehow a technology or practice that is objectively safer can still be "unethically safe" and should be shunned in favor of less safe stuff.
I don't buy it. This isn't individual action here; no rejection of utilitarian philosophy is going to help you.
In fact, medical devices actually seem counter to your argument. No one says an automatic defibrillator's decisions about when to fire are going to be as good as a human doctor's, and in fact these things have non-zero (often fatal) false positive rates. But they save lives, so we use them anyway.
Bottom line, that point just doesn't work.
Everyone talking about statistical evidence should take a look at this NHTSA report [1]. For example, "Figure 11. Crash Rates in MY 2014-16 Tesla Model S and 2016 Model X vehicles Before and After Autosteer Installation", where they are 1.3 and 0.8 per million miles respectively.
Unfortunately, this report seems to have shot itself in the foot by apparently using 'Autopilot' and 'Autosteer' interchangeably. That leaves open the possibility that the Autopilot software improves or adds fairly basic mitigation features, such as emergency braking, while having unacceptable flaws in steering and obstacle detection at speed. In addition, no information is given on the distribution of accident severity.
Even if these claims are sustained, there are two specific matters in which I think the NHTSA has been too passive. First, as this is beta software, it should not be on public roads in what amounts to an unsupervised test that potentially puts other road users at risk. Second, as this system requires constant supervision by an alert driver, hands-on-the-wheel detection is not an acceptable test that this constraint is being satisfied.
[1] https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF
4. There is another completely unknown variable: how many times the autopilot would have crashed if the human hadn't taken over. Tesla's statistics actually state how safe the autopilot and human are combined, not how safe the autopilot is by itself.
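To make that concrete, here is a minimal sketch with entirely invented numbers (the miles, crash counts, and the fraction of takeovers that actually prevented a crash are all assumptions; nobody outside Tesla has these figures):

    # All numbers invented for illustration; none come from Tesla.
    miles = 10_000_000
    crashes_observed = 8           # crashes with Autopilot engaged
    interventions = 400            # times a human had to take over
    p_crash_if_no_takeover = 0.05  # assumption: 5% of takeovers prevented a crash

    combined = crashes_observed / miles * 1e6
    solo = (crashes_observed + interventions * p_crash_if_no_takeover) / miles * 1e6
    print(f"autopilot + human: {combined:.1f} crashes per million miles")
    print(f"autopilot alone (estimate): {solo:.1f} crashes per million miles")

With those assumptions the safe-looking 0.8 becomes 2.8: the supervising human can easily account for most of the measured safety.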
A lot of human-driven car accident victims have done nothing wrong at all.
Almost every driver thinks they're better than average.
Even when it's a stupid person dying from their stupidity, it's still a tragedy.
I really think data-driven analysis is the way to go. If we can get US car fatalities from 30,000 a year to 29,000 a year by adopting self-driving cars, that's 1,000 lives saved per year.
Agree with your point #3. If Tesla autopilot only works in some conditions, its numbers are only comparable to human drivers in these same conditions.
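As a sketch of what a fair, condition-matched comparison would look like (all figures invented; the point is the stratification, not the values):

    # Hypothetical (millions of miles, crashes) broken down by road type.
    human = {"divided_highway": (120.0, 96), "city_street": (80.0, 104)}
    autopilot = {"divided_highway": (30.0, 21)}  # AP can't drive city streets

    # Compare only the conditions both datasets share.
    for road in human.keys() & autopilot.keys():
        h_miles, h_crashes = human[road]
        a_miles, a_crashes = autopilot[road]
        print(f"{road}: human {h_crashes / h_miles:.2f} vs "
              f"autopilot {a_crashes / a_miles:.2f} crashes per million miles")

Pooling all human miles, including the city streets where crash rates are higher and Autopilot never engages, is exactly the apples-to-oranges comparison being warned about here.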
Designing for safety means that you take into account human behavior at every level and engineer the product to avoid those mistakes.
We already know that there is an area between driver assistance and automatic driving where people just can't keep their attention at the level it needs to be. Driving is an activity that maintains attention; people can't watch the road with a hand on the wheel while nothing happens, keeping their attention up while the car does the driving.
The way I see it, Tesla's biggest safety sin is shipping an experimental beta feature that sits exactly in this known human weak spot. Adding a potentially dangerous experimental feature, warning about it, and then washing your hands of it is not good safety engineering.
The news story points out how other car manufacturers have cameras that watch the driver for signs of inattention. A driving assistant should watch both the driver and the road.
You can't have a driving assistant that can be used as an autopilot.
I have now talked with two people who have Autopilot in their Model S's, and both said the problem with autopilot is that it is "too good". Specifically, it works completely correctly like 80-90% of the time, and as a result, if you use it a lot, you start to expect it to be working and not fail. If it then beeps to tell you to take over, you have to mentally get back into the situational awareness of the road and then decide what to do. If that lag time is longer than the time the car has given you to respond, you would likely crash.
Makes me wonder how this gets resolved without jumping through the gap to 'perfect' autopilot.
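A back-of-the-envelope sketch of why that lag is so unforgiving (the speed and reaction times below are assumptions, not measurements from any real car):

    speed_kmh = 113    # ~70 mph
    reacquire_s = 2.5  # assumed time to rebuild situational awareness after the beep
    react_s = 0.75     # ordinary reaction time once you understand the scene

    speed_ms = speed_kmh / 3.6
    print(f"{speed_ms * (reacquire_s + react_s):.0f} m travelled before you even act")

That's roughly 100 m of blind travel. If the warning comes any later than that before an obstacle, the crash is already decided, no matter how attentive the driver "should" have been.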
Relevant quote from an article about the Air France 447 crash:
> To put it briefly, automation has made it more and more unlikely that ordinary airline pilots will ever have to face a raw crisis in flight—but also more and more unlikely that they will be able to cope with such a crisis if one arises.
https://www.vanityfair.com/news/business/2014/10/air-france-... (linked to from another HN post today: https://news.ycombinator.com/item?id=16757343 )
And in the current climate of "all new things must be super extra safe to be used, even if they displace much worse technology", I think machine-driven cars will have a hard time being accepted. Imagine the autopilot is 100 times safer than human drivers. Full deployment would still mean about a fatality a day in the US. It does not seem the public would accept this, even though roughly 30,000 annual human-caused car fatalities could be avoided. Maybe we should use our education system to help heal us from the safety-first (do nothing) culture we seem to have gotten ourselves into in the US.
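The arithmetic behind that, spelled out (30,000 is the round figure above; the 100x safety factor is hypothetical):

    fatalities_per_year = 30_000
    safety_factor = 100  # hypothetical "100 times safer"

    remaining = fatalities_per_year / safety_factor
    print(f"{remaining:.0f} machine-caused deaths/year, {remaining / 365:.1f} per day")
    print(f"{fatalities_per_year - remaining:.0f} lives saved per year")

About 0.8 machine-caused deaths a day, each one a headline, against 29,700 statistical lives saved that nobody will ever see.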
This is a similar situation to nuclear power. With nuclear, all your waste can be kept and stored, and nuclear causes almost zero deaths per year. Contrast that with the crazy abomination that is coal burning.
> There is additional controversy, it should be noted, about the proposed level 2. In level 2, the system drives, but it is not trusted, and so a human must always be watching and may need to intervene on very short notice -- sub-second -- to correct things if it makes a mistake. [...] As such, while some believe this level is a natural first step on the path to robocars, others believe it should not be attempted, and that only a car able to operate unsupervised without needing urgent attention should be sold.
As far as I can tell, there is no way to resolve this effectively. Google spoke about this problem publicly and explained why they targeted Level 4 directly.
Train drivers are the closest analogue, and they have a lot of equipment that forces them to stay alert.
Personally, I think that autopilot should only take over when the driver is unwell or when it thinks there will be a crash.
Even when driving full-time, drivers get distracted. So when the car is doing the driving, humans will get bored and won't be able to respond in time.
An autopilot, however, is always on and never distracted, but doesn't work in some cases.
Uncanny valley of autonomy, I guess. Google noticed this early on, and their answer was to remove the steering wheel completely from their campus-only autonomous vehicles. Either it's very close to 100% Level 5 (you can pay no attention whatsoever and trust the car completely), or it's explicitly Level 2 (essentially advanced ADAS); there's nothing in between that isn't inherently dangerous.
You have to learn how to use it properly and pay attention. I use it a lot, and it can drive from San Francisco to LA pretty much without stopping. But every once in a while it does mess up, and you just need to make sure you're watching and ready to take over quickly. I agree that it's good enough that people might stop paying attention, but they just need to realize that they have to hold the software's hand in these initial stages. As a matter of fact, being in the driver's seat and able to take control makes me much more comfortable in a self-driving Tesla than in the back seat of a much more advanced Waymo self-driving car.
> Makes me wonder how this gets resolved without jumping through the gap to 'perfect' autopilot.
I suspect the unfortunate reality is that people die on the journey to improvement. Once we decide this technology has passed a reasonable threshold, it becomes a case of continual improvement. And as horrible as this sounds, it is not new. Consider how much safer cars are than 20 years ago and how many lives modern safety features save. Or go for a drive in a 50-year-old sports car and see how safe you feel. In 50 years we'll look back at today's systems the same way we look at the days of no seat belts.
I honestly don't understand why, if the driver does not take control when the car has sensed an issue, it does not just turn on the hazards and slowly roll to a stop. That seems like the safest way to keep people from dying.
My Subaru has a "lane assist" feature which helps keep you in your lane while driving. It only applies a little pressure in the direction you should be moving; if you do nothing, the car will still drift out of the current lane. This helps me stay more aware, since I can never rely on the car to fully steer: if my concentration lapses on the freeway, I get a reminder that I need to steer when I come upon a curve or drift to one side a bit. So it reinforces that I'm always in control.
OTOH, the car has probably saved me from at least one accident where traffic ahead of me suddenly slowed down right as my attention relaxed. The _only_ correct way of using these systems is to treat them as an extra level of safety on top of your responsibilities as a driver. An "extra set of eyes" to help you avoid an accident while leaving you as the primary driver of the vehicle.
The best solution is to keep the driver engaged, i.e. still holding the wheel and showing they are paying attention. It's how all the other cars with lane assist do it: you can't go more than 30 seconds without touching the wheel before it complains.
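A minimal sketch of how such a nag timer works (the 30-second threshold and torque sensing are generic lane-assist behavior, not any specific manufacturer's implementation):

    import time

    HANDS_OFF_LIMIT_S = 30.0  # assumed nag threshold

    class WheelMonitor:
        def __init__(self):
            self.last_touch = time.monotonic()

        def on_steering_torque(self):
            # Called whenever the torque sensor detects hands on the wheel.
            self.last_touch = time.monotonic()

        def should_nag(self):
            return time.monotonic() - self.last_touch > HANDS_OFF_LIMIT_S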
It's not perfect but it sure beats the driver being so used to it they start doing other things like watching movies on their laptop.
This. I don't know, I feel ambivalent about Autopilot right now. Like most on HN, I always crave new technologies, but the fact that something very dangerous can be very good most of the time and very bad occasionally makes me wary of using it altogether. I know that's irrational, because more often than not it probably saves you from your own mistakes.
Also, if Tesla intends to shield itself behind a beta status for its autopilot system until it is perfect, I think this beta status will last even longer than the time Gmail was in beta. At the least, they should own this problem and hardcode a meaningful warning at such intersections, or do something.
> Specifically, it works completely correctly like 80-90% of the time, and as a result, if you use it a lot, you start to expect it to be working and not fail.
Different environment (and stakes), but I observed the same thing happening a couple of times during IT incident response. The automation crashed/failed/got into a situation it could not automatically fix, and the occurrence was rare enough that people actually panicked, and it took them quite a while to fix what was in the end a simple issue. They just didn't have the "manual skill" anymore (in one case, not even the tools to go and solve the issue without manually manipulating databases).
On my first drive with Autopilot, it tried to drive into the back of a stopped car in a turning lane, and it randomly had phantom braking issues, like when going under overpasses.
Not to mention there are things most people do defensively while driving that Autopilot doesn't: anticipating a vehicle coming into my lane by looking at both its wheel position and its lateral movement. Autopilot ignores that information until the vehicle is basically in your lane.
Personally, I feel I have to be more on guard and attentive with it on, because I know there are fatal bugs.
As someone with a car that has a much weaker system (ProPilot), I can see where that would be a problem. It's not really tempting to let ProPilot drive on its own, as it regularly wants to take exit ramps and occasionally pushes too much toward the middle of the road.
It seems like AutoPilot users take their hands off the wheel regularly, and I just don't see how that is safe with this type of system.
If people are using it, not paying attention, with the expectation that it will beep to tell them to take over, that's a big problem. In situations like this divider issue it won't beep; it thinks everything is fine right up until it rams you into a stationary object. I think people may not be fully aware of all the potential failure modes of this tech.
> said the problem with autopilot is that it is "too good".
Isn't it true that the thing cannot detect a stationary object in its path if the vehicle is traveling above 50 km/h? If that is true, then this is an extremely dangerous situation for the owners of these vehicles...
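For what it's worth, that limitation is commonly explained by how radar-based systems filter clutter. A generic sketch of the idea (not Tesla's actual signal chain; all values invented):

    ego_speed = 25.0  # m/s, ~90 km/h

    # Radar returns as (range_m, closing_speed_mps). A stopped object closes at
    # exactly ego_speed -- the same signature as bridges, signs, and guardrails.
    returns = [(80.0, 25.0), (60.0, 8.0), (120.0, 25.0)]

    tracked = [r for r in returns if abs(r[1] - ego_speed) > 2.0]
    print(tracked)  # only the moving car at 60 m survives the clutter filter

The stopped vehicle at 80 m gets discarded along with the overpass at 120 m, which is why these systems lean on the supervising driver to handle stationary obstacles.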
"too good" -> "completely correctly like 80 - 90%". I'm unable to make the two sentences make sense in the same statement, since 85% looks extremely low to be considered too good, given that the outcome is to crash if you do not pay attention in the 15%.
Indeed. If I have to be sober and paying attention when riding in a robot car, I might as well drive it myself, just to keep from falling asleep. At which point, I'll just not bother with the robot car at all.
> Makes me wonder how this gets resolved without jumping through the gap to 'perfect' autopilot.
The solution is easy, but probably not what many would like to hear: ban all "self-driving" or "near-self-driving" solutions from being activated by the driver, and only allow the ones that have gone through very rigorous and extensive government testing (in a standardized test).
When the government certifies the car for Level 4 or Level 5, assuming the standardized test has strict requirements like one intervention per 100,000 miles, or whatever works out to around 10-100x better than our best drivers, then the car can be allowed on the road.
Any lesser technology could still be used in other cars, but instead of being a "self-driving mode" it should just assist you: auto-braking when an accident is imminent, or maybe just warning you that one is. That could still significantly improve driver safety on the road until we figure out Level 5 cars.
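For a sense of scale, a sketch of the test mileage such a certification would need, assuming interventions are rare, independent events (Poisson); the rate and confidence level are the hypothetical figures from above:

    import math

    target_rate = 1 / 100_000  # at most one intervention per 100,000 miles
    confidence = 0.95

    # Intervention-free miles needed to show the rate is met at 95% confidence.
    miles_needed = -math.log(1 - confidence) / target_rate
    print(f"{miles_needed:,.0f} intervention-free test miles")

Roughly 300,000 clean miles just to clear that one bar, and every software update arguably resets the statistics, which is why a rigorous standardized test is far more demanding than it sounds.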
A week after the fatal accident, Musk continues to promote Autopilot as safe by retweeting stuff like:
https://twitter.com/Teslarati/status/980476745106231297
Tesla Model S navigates one of Vienna’s ‘crappiest’ roads on Autopilot
The IIHS '17 study puts Autopilot at only a 40% reduction in accidents, the same figure their '16 study found for any car with automatic emergency braking.
Currently, the Tesla autopilot page has this as its top headline:
> Full Self-Driving Hardware on All Cars
Followed by this:
> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
http://web.archive.org/web/20180323054727/https://www.tesla....
Getting past the arguments over whether Tesla should be calling this "Autopilot": are its claims about hardware true? Is the failure to detect stationary objects like a firetruck or a gore point solely the result of currently-inadequate software? Because it doesn't sound like the sensor suite is anywhere near the AVs of Waymo or Uber.
I want to ask: why is Tesla's advanced suite of cameras and ultrasonic sensors unable to detect a big, yellow-and-black stationary barrier?
Street View: https://www.google.com/maps/@37.6346515,-122.104109,3a,75y,8...
In combination with chevron markings that mean "never, ever drive here". Either should be a globally overriding signal to emergency-brake or turn, not "keep following the white line".
If the answer is "during morning light, it doesn't see it", then they need to take the feature offline until the car is fitted with appropriate LIDAR or other technology that can detect stationary obstacles.
> Our data shows that Tesla owners have driven this same stretch of highway with Autopilot engaged roughly 85,000 times since Autopilot was first rolled out in 2015 and roughly 20,000 times since just the beginning of the year, and there has never been an accident that we know of. There are over 200 successful Autopilot trips per day on this exact stretch of road.
Which might be technically true, but conveniently ignores other, very similar crashes.
[1] https://www.tesla.com/blog/what-we-know-about-last-weeks-acc...
I'm going to wait five to ten years before I get my robot car. Let all those early adopters be the guinea pigs we need to reach the point where robot cars are safer than human drivers.
It seems tons of upper middle class to wealthy people are lining up to pay dearly with their wallets and lives to be a billionaire's crash test dummy.
Come to think of it, whatever happened to Tesla's self-driving effort? They claimed to be shipping Model 3 cars with all the hardware needed for self driving (not including a LIDAR). They produced videos. Most of this was 1-2 years ago. But they never got to deployment, or even to the point of demoing it for car magazine writers.
I wonder if this will be the case that really tests Tesla's blanket legal defense in court. (That accidents are due to less than "fully attentive" drivers.) The reaction time needed in that video to correct for the autopilot is downright scary.
It's a situation where, if you were piloting yourself, you're a lot more likely to have mentally mapped that you continue in a straight line at that point, so listing to the left is going to feel WRONG even when suddenly blinded by the sun. But if Autopilot takes you there, and you're paying attention only at a "lizard brain" level, without that mental navigation map in your head, and the sun's in your eyes, you're going to take a few seconds to figure out how to correct without making things worse. That's enough to hit the gore point in that video.
For all those who are clamoring for Autopilot to go back into the lab, consider this: I have Autopilot, and I use it the way people are instructed to. I consider it a companion to my driving, a backup co-driver with better reflexes. I am in control, and I steer the vehicle with my hands on the wheel together with the machine. We are driving the car together, and I overrule the machine when we disagree. Often the machine catches things before I am able to react. I am 100% convinced that my family is safer with "us" driving them than with just me driving solo.
What do you say to people like me, who, if you took the feature away, would be forced to reduce their safety? It's easy to call for its removal when you only focus on the subset of people whose safety would be increased by removing it (i.e., the people, often through no fault of their own, who come to rely on it more than it is designed for). What about everyone else who currently benefits from it? Will you force them to go back to driving "solo" to protect those people?
It's not as simple as "Tesla is reckless for releasing this" -- it is providing real safety to people who use it the way it is intended. Also, releasing Autopilot as-is accelerates the development of full autonomy, which will save lives that would otherwise be lost to accidents if it arrived later. Any analysis that does not account for this dynamic is missing an important piece of the ethics of removing it.
The main question worth asking is whether there is more Tesla could do to ensure Autopilot is used as a co-driver. The name is a poor choice, for a start.
That's what the NTSB is for.
Autopilot will put massive pressure on states and municipalities to improve road maintenance -- specifically lane markings and fixes to "peculiar" roads, intersections, etc. This will increase budgets, which will in turn create massive opposition to self-driving autos from those not wanting to pay more taxes for other people's fancy cars.
I travel/drive a good deal in the summer, and for a few years now I have muttered (mostly silently) to myself that if autopilot cars are going to use painted lines to navigate, they're "gonna have a bad time".
Poorly painted lines aside, think of how many "wtf" moments you have on the roads around your town. Just down the road from my house is a road that just... ends (due to poor planning years ago). It's obvious to human drivers what to do (you're supposed to exit the paved road and drive over a hump to the dirt road that continues), but for an autopilot? Not so much, unless it makes use of "data sharing" that reveals the human solution to this screwed-up piece of road.
> Wednesday, a Tesla spokeswoman told the I-Team, "Autopilot is intended for use only with a fully attentive driver," and that it "does not prevent all accidents - such a standard would be impossible - but it makes them much less likely to occur."
Actively veering towards stationary barriers in otherwise perfectly safe conditions is not what I would describe as reducing the likelihood of accidents.
Vehicles with autonomous features will save lives. Fewer people will be injured or killed in cars that have these features. However, some people will still be injured or killed, and those people will comprise a different set than would otherwise have been harmed. They or their families will seek restitution, and hence the way forward will be determined by the courts.
What I am noticing right now is that if I were able to afford a Tesla today, there is no way I would use Autopilot.
I will still buy a Tesla if I can ever afford it; I'll just leave Autopilot well alone.
I don't know what would get me back from this mental place that now seems firmly set in my mind.
When I look at the video, I would probably have driven wrong for the first few seconds: the road lining is not there anymore, causing you to make mistakes. But people can correct quickly when there is an anomaly. It's weird that Autopilot didn't see the divider.
The biggest thing I gathered was that the public safety barrier was damaged in a previous crash 12 days earlier, so the driver basically hit a hard wall. This has more implications for US infrastructure than for Tesla's safety, IMO.
My former employer makes a camera-based lane departure warning system. Although I was not involved with its development, I do know that the amount of testing that they do on these systems is very extensive (hundreds of people involved in the test process).
I would be very interested in seeing a side-by-side test of the LDW systems of major manufacturers. This might give us an idea whether the problem is fundamental to the technology or simply a deficiency in the engineering approach of one (or some) manufacturer(s).