abalone | 7 years ago
I keep wondering what the customer benefit of Level 2 autopilot is, if not to lower your attention and relax your mind. Tesla's "out" is that drivers are supposed to retain full attention and oversight of the Autopilot system -- but if you strictly follow this rule, what is the benefit of Autopilot?
I can see the benefit to Tesla and future Tesla customers of essentially crowdsourced fleet learning. But what is the benefit right now if you strictly follow the rule of remaining alert enough to intervene at a second's notice?
In previous threads, the best explanation I got from Tesla owners was that it frees you from the "details of physically driving" so you "can now supervise instead."[1] That just seems suspect. Supervising Autopilot closely enough to correct sudden mistakes seems to me to carry very nearly the same mental load as steering yourself.
[1] https://news.ycombinator.com/item?id=17151116

iamtew | 7 years ago
Is the article available to Europeans somewhere? All I'm getting is a message with this:
> Unfortunately, our website is currently unavailable in most European countries. We are engaged on the issue and committed to looking at options that support our full range of digital offerings to the EU market. We continue to identify technical compliance solutions that will provide all readers with our award-winning journalism.
On the other hand, I'm wondering how much I want to read an article from a website that must track me when I just want to read something...

amyjess | 7 years ago
http://archive.is/http://www.latimes.com/local/lanow/la-me-l...

izacus | 7 years ago
I'd suggest flagging all articles posted here that prevent EU access. The content writers kinda have a right to deny you access, but we don't need to drive traffic in the direction of those who refuse to handle your privacy well.

TangoTrotFox | 7 years ago
It's also quite disconcerting to consider the implications of this. Since we "opt in" to the articles we click on, tracking what individuals read and then building a profile based on that would be quite informative, and quite invasive.
And then sharing, trading (to expand the profile), and selling it to other companies? Really nasty stuff.

jijojv | 7 years ago
https://www.google.com/search?q=Tesla+in+Autopilot+mode+cras...

lagadu | 7 years ago
Really goes to show that what they do with the reader data they collect is shady to the point of illegality in large swathes of the world.

sireat | 7 years ago
Exactly. While I have access to a VPN and other means of bypassing the block, I have no intention of supporting such a lackadaisical approach to our privacy.

sschueller | 7 years ago
This is so stupid: they are still infringing on European citizens who just happen not to be in the EU. For example, I can get to the page, but I am in Switzerland, which is not part of the EU yet sits smack in the middle of Europe.

perlgeek | 7 years ago
I'm in the EU, but since I'm looking at it from a work computer, and I guess our network egress is in the US, I can see the article. So they are in breach of the GDPR even with their stupid block.

confiscate | 7 years ago
Every time I ask this question, the response is "radar as a technology has too much noise, not accurate enough to detect stationary objects."
But it DOES detect stationary objects. Otherwise no one would even turn on Tesla AP at all.
Why is it so hard to detect stationary objects in front of you?

cherioo | 7 years ago
People always lament how Tesla tries to mislead customers about its capability, but is there actually data that shows this is truly the case among Tesla owners? How many "attention needed" beeps does it take before a customer can be considered reasonably informed that the car is not fully self-driving?
Even for non-Tesla cars, whose manufacturers don't try to "mislead", 71 percent of people believe automatic emergency braking can avoid all crashes [0]. Is this percentage higher or lower for Teslas, and is the difference warranted?
[0] https://newsroom.aaa.com/2016/08/hit-brakes-not-self-braking...
[1] Quote from above: "When traveling at 45 mph and approaching a static vehicle, a scenario designed to push systems beyond the stated limitations, the systems designed to prevent crashes reduced speeds by 74 percent overall and avoided crashes in 40 percent of scenarios. In contrast, systems designed to lessen crash severity were only able to reduce vehicle speed by 9 percent overall."

mikeash | 7 years ago
How many non-Autopilot cars have crashed into parked vehicles? It happens so often that many states have “move over” laws explicitly designed to reduce it.
Does Autopilot actually make the problem worse, or is it just more newsworthy?

Animats | 7 years ago
Has Tesla blamed the driver yet? They might get away with that here. The road was not divided and was unsuitable for their lane-keeping system. The pavement markings are unusual.[1]
Of course, as usual, their obstacle detection failed to detect a stationary obstacle that didn't look like the rear end of a car in the same lane. We need a minimum standard for any vehicle that takes automated control of steering and braking: it must reliably stop for obstacles.
[1] https://goo.gl/maps/BjqTZoD5Yws

RobLach | 7 years ago
I remember reading people saying that insurance premiums would plummet for Teslas because of how much safer they would be on the road.
Today I read that Tesla is getting into the insurance business because premiums are getting out of hand. It makes some sense, since internally they’ll have more data they can use to deny claims.
It’s interesting how reality plays out.
https://electrek.co/2018/05/29/tesla-insuremytesla-insurance...

jread | 7 years ago
I live in Laguna, cycled by this accident today, own a Tesla, and can attest to consistent Autopilot issues on this section of the highway. The road widens with a turnoff to the right, and in my experience that is exactly the direction Autopilot wants to go every time, rather than following the center line, even when it is tracking a lead vehicle that does so. There are no lines to the right, so I'm not quite sure why it does this. But I can understand how an inattentive driver might end up in this situation, because until this point in the road Autopilot works great.

olliej | 7 years ago
I really wish all the car manufacturers would stop with this bullshit "you still have to be driving it" self-driving nonsense.
Claiming that the driver is still responsible for driving the car, while using a system designed to encourage the "driver" not to pay attention, is a dumb idea, and all I can see is this kind of thing leading to regulations that delay actual self-driving cars.
I'm not a self-driving car fanboy or anything. I don't think it's just around the corner, but it's clearly going to happen /eventually/, and I can't imagine it being more dangerous than regular drivers. But this kind of "self-driving but not" feature feels like the sort of thing that calls for regulation by government agencies, and sufficiently "dumb" mistakes from these systems seem like the sort of thing that triggers reactive over-regulation.

jonknee | 7 years ago
I don't follow it super closely, but there is exactly one car manufacturer I can think of that is guilty of pushing the self-driving myth. The established carmakers seem to market it as an assist, not as an autopilot. Tesla is the outlier; everyone else is much more conservative.

bazooka2th | 7 years ago
A spokesperson pointed out that the owner's manual reads, “Autosteer is not designed to, and will not, steer Model S around objects partially or completely in the driving lane”.
I haven't seen the encouragement to not pay attention. In some TV ad, maybe it was for a new Cadillac, the driver opens a soda with both hands off the wheel, and that seemed to be as far as they were willing to go.

foobar1962 | 7 years ago
> I'm not a self-driving car fanboy or anything. I don't think it's just around the corner, but it's clearly going to happen /eventually/ ...
Self-driving vehicles have been used at Rio Tinto's Pilbara mine for a couple of years. Sure, it's on private land and the complexity of the task is far smaller than for public roads, but it's happening now.
http://www.abc.net.au/news/2015-10-18/rio-tinto-opens-worlds...

api | 7 years ago
Self-driving strikes me as a technology with a hard boolean quality threshold: either it is as good as a human, or it just doesn't work.
This is a lot like rocketry. A rocket that meets 95% of its design requirements explodes, crashes, or fails to reach the proper orbit. It's damn near 100% or failure. Too bad Elon can't see this.
I think Tesla's obsession with Autopilot is misplaced. Give me a solid, reliable, affordable EV with good range. Autopilot would be nice, but only if it works; I can wait for that. But a solid EV is something I'm actually more excited about, believe it or not.
I'm sort of afraid Tesla will throw the whole game in pursuit of self-driving tech.

danepowell | 7 years ago
It's not even necessarily a matter of inattention. Imagine that the Autopilot was driving along just fine, and when it was just a few feet away from the parked cruiser it suddenly veered towards it for whatever reason. It might be physically impossible for any human, no matter how well they are paying attention, to intervene and avoid a collision.
I'm not saying that's what happened, but it's certainly possible. The mere fact that it's possible casts doubt, for me, on this entire design philosophy.
> "Tesla has always been clear that Autopilot doesn't make the car impervious to all accidents, and before a driver can use Autopilot, they must accept a dialogue box which states that 'Autopilot is designed for use on highways that have a center divider and clear lane markings,'" a Tesla spokesperson said in an emailed statement.
From the picture at the top of the story, it is clear there is no divider on the stretch of road where the accident occurred.
Maybe it should detect when it is being misused, and start blaring loud noises, do its best to make the driver carsick, or simply immobilize the car.

modzu | 7 years ago
uh, isn't a self-driving car you have to drive an oxymoron?? does anybody understand words??

true_religion | 7 years ago
I agree. They should publicly distance themselves from the self-driving moniker until they at least reach Level 4. At Level 3, this should at best be described as "driver assistance", or maybe a more descriptive but less palatable marketing term would be "training wheels": you still have to drive, but if you take your eyes off the road you might not die immediately like in a "real" car.

acct1771 | 7 years ago
Cadillac Super Cruise doesn't claim you still have to be driving it. It's hands-free. It watches your eyeballs to make sure you're paying attention while it drives the car. But it only does this on specifically supported roads. It also doesn't claim to be fully autonomous.

AceJohnny2 | 7 years ago
He's been promising "self-driving" for years, except the damn technology isn't ready or safe enough for what people are clearly using it for. The manufacturer puts a warning label on it, but the usage model/UI is clearly flawed.
His engineers know this, which is why Tesla's self-driving group has had such a high turnover rate over the last few years. Musk keeps overriding them.

paulcole | 7 years ago
https://www.tesla.com/autopilot

gU9x3u8XmQNG | 7 years ago
> I really wish all the car manufacturers would stop with this bullshit "you still have to be driving it" self-driving nonsense.
To align with your statement, I can state: "I really wish all people would stop this bullshit 'autopilot means it can drive itself and I don't have to do anything' nonsense."
I feel this is quite a critical analysis of an ambiguous product title: 'Tesla Autopilot (& Enhanced Autopilot)'.[1]
I read the title, "Tesla in Autopilot mode crashes into parked Laguna Beach police cruiser"... and wondered how this article is even news.
Where, besides future plans for their vehicle product (which, as Wikipedia suggests, may be an independent product itself, and not an extension of 'Tesla Autopilot / Enhanced Autopilot'), does Tesla's product in question indicate it's self-driving? An autonomous car? I do admit, the 'summon' feature of their vehicles makes this slightly confusing...
Additionally, what are the criteria for being 'self-driving'? I assume you are talking about 'Autopilot'?
If so, this over-simplifies the entire scenario.
The very definition, and scope, of the title 'autopilot'/'automatic pilot' is far from specific. In fact, most definitions are applicable only to aerospace.[2]
> Claiming that the driver is still responsible for driving the car, while using a system designed to encourage the "driver" not to pay attention, is a dumb idea, and all I can see is this kind of thing leading to regulations that delay actual self-driving cars.
This part of your post I actually agree with. Having said that, I believe this is more about people's applied opinions of what a 'self-driving' car is and what we are seeing with these vehicles, and not about any lack of technological advancement.
But again, I urge caution in applying your, or anyone else's, opinion of what a 'self-driving' car is.
FINALLY: I really wish we could start taking some responsibility for what we agree to. If the driver must agree, even once, to a clear scope of requirements to use the product, and then blatantly disregards it, how, or why, is this even news?
We all accept terms and conditions of use, and privacy policies, with almost every product we use, and we don't expect to be shown them every time we use that same product. I'm one of the few who read these documents, and more often than not I refuse to use the product. I made the choice not to agree.
There are other comments on this article that clearly indicate their authors believe a 'once off' popup is questionable at best, but I really don't understand this. Without a doubt, having to push a button to enable this Tesla/Enhanced 'Autopilot' feature, then scroll through a dialog to the end (yes, okay, exaggerated), and then click 'ok', would be a huge hazard.
I don't believe the user should be prompted with the same notice every time they use a product.
However, I do not necessarily believe being prompted once is the right answer either.
What if someone else uses the vehicle? Or you sell it?
I do not pretend to have the answers...
--
Disclaimer: I know Wikipedia is not gospel.
[1] https://en.wikipedia.org/wiki/Tesla_Autopilot [1.1] https://www.tesla.com/blog/dual-motor-model-s-and-autopilot [1.2] https://www.tesla.com/autopilot [2] https://en.wikipedia.org/wiki/Autopilot

smnrchrds | 7 years ago
We have seen time and time again that if corporations can wash their hands of something, they will do so. For the past few decades, and perhaps for much longer, the operating principle of corporations has been "if it's not illegal, it's moral."
As an example, rationally speaking, credit bureaus and financial institutions are 100% responsible if they give someone a loan or open an account for them under your name. But the legal system allowed them to put the onus on the victims of the fraud to prove that they are victims, not perpetrators, of it. They called it "identity theft" instead of "bank fraud," and now we have victims who never did anything wrong, and could not have done anything differently, battling CRAs and FIs for years to get out of a situation they should never have been put into in the first place [0][1].
If the law allowed companies to sell Level 5 autonomous vehicles while keeping the passenger (because it's not really a driver anymore) responsible for all legal matters, they would do so, and they would lobby against any law that puts the burden where it correctly belongs. And we would be forced to put our financial, mental, and legal well-being on the line just to get from point A to point B, because it will be as impossible to avoid autonomous tech then as it is to avoid credit bureaus now.
[0] Like this case: http://www.cbc.ca/news/canada/manitoba/credit-report-error-f...
[1] Obligatory Mitchell and Webb: https://www.youtube.com/watch?v=-c57WKxeELY

omarforgotpwd | 7 years ago
Assuming that every major AP collision makes the news, it is pretty amusing that the latest non-fatal ones have all coincidentally involved emergency vehicles. I imagine it must be related to how such vehicles have the right to park just about anywhere, including on roadways where drivers do not expect parked vehicles.

cr4zy | 7 years ago
One way to avoid these types of crashes, IMO, is anomaly detection. It's quite simple to do anomaly detection in pixels using modern deep pixel-prediction nets like PredNet. In my experiments you get a few seconds of lead time on something like a car cutting you off (the car starts to head out of the lane before actually crossing it, for example). This allows alerting the driver, and with a full-windshield HUD you could even highlight the anomalous pixels on the windshield. The nice thing about this is that it can be trained in an unsupervised manner on all the available data.
Some important details: find anomalies within object bounding boxes, using something like TensorFlow's pretrained object-detection net; otherwise buildings with lots of striations would light up the anomaly detector. Also, you should detect anomalies in a human colorspace like CIELAB so that white cars (#fff) are not artificially weighted as more anomalous.
Finally, you could use this as input to a planner like Model Predictive Control, where a higher cost is incurred for approaching anomalous objects.
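
A minimal sketch of the pipeline described above (an illustration, not the commenter's actual code): predict_next_frame stands in for a PredNet-style video-prediction net, detect_objects for a pretrained object detector, and the threshold value is an arbitrary assumption.

    # Hypothetical sketch: per-object anomaly scoring from next-frame
    # prediction error, computed in CIELAB as described above.
    import numpy as np
    from skimage.color import rgb2lab  # perceptual CIELAB colorspace

    def predict_next_frame(prev_frames):
        # Stand-in for a PredNet-style predictor; trivially predicts "no change".
        return prev_frames[-1]

    def detect_objects(frame):
        # Stand-in for a pretrained detector; returns (x0, y0, x1, y1) boxes.
        return []

    def anomaly_map(actual_rgb, predicted_rgb):
        # Prediction error per pixel in CIELAB, so a white car (#fff) is not
        # weighted as more anomalous merely for its large raw RGB values.
        actual = rgb2lab(actual_rgb.astype(np.float64) / 255.0)
        predicted = rgb2lab(predicted_rgb.astype(np.float64) / 255.0)
        return np.linalg.norm(actual - predicted, axis=-1)  # shape (H, W)

    def anomalous_objects(frame, prev_frames, threshold=15.0):
        # Score anomalies only inside detected-object boxes, so striated
        # buildings and other textured backgrounds don't dominate.
        amap = anomaly_map(frame, predict_next_frame(prev_frames))
        boxes = detect_objects(frame)
        scores = [amap[y0:y1, x0:x1].mean() for (x0, y0, x1, y1) in boxes]
        # Flagged boxes could be highlighted on a HUD, or fed to an MPC
        # planner as an extra cost for approaching anomalous objects.
        return [(box, s) for box, s in zip(boxes, scores) if s > threshold]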

theCricketer | 7 years ago
This video (https://youtu.be/wsixsRI-Sz4?t=1h18m28s) shows Elon Musk, two years ago, saying the following:
"I basically consider autonomous driving to be a solved problem".
"A Model S and Model X can drive with greater safety than a person, already. Right now."
"We are less than two years away from complete autonomy".

Shank | 7 years ago
Totaled? Maybe insurance-totaled, but that police cruiser looks relatively intact in the picture. The driver's side door is open, and it doesn't look like it was crumpled in any way in that zone. The rear driver's-side passenger door looks like it took more damage, and the entire frame of the cruiser still looks like it's in relatively okay condition. The Tesla is in the same shape.
I'm not trying to downplay the impact of the driver or Autopilot here, but even if a police officer had been in the cruiser, it doesn't look like it would have been as catastrophic as the article implies.
> "Tesla has always been clear that Autopilot doesn't make the car impervious to all accidents, and before a driver can use Autopilot, they must accept a dialogue box which states that 'Autopilot is designed for use on highways that have a center divider and clear lane markings,'" a Tesla spokesperson said in an emailed statement.
Does this dialog box appear every time the car is started or is it a one time thing, just like terms of services?

dmitrygr | 7 years ago
Basically, this is a well-known issue in aviation. As automation gets better and better, humans rely on it more and more and become less and less able to handle even minute failures in it. Additionally, humans are absolutely terrible at being dropped into a complex situation and having to make immediate rational decisions about it; thus the failure mode of "in case of error, tell the human to handle it" is a bad idea if the time between "tell the human about it" and "crash into things" is under ten seconds or so. This has been the subject of many NASA studies and NTSB reports, and the above article does a good job presenting this info in a form a layman can understand.
There are currently no known easy solutions, sadly.

ucaetano | 7 years ago
Tesla really is sticking it to the man!