I really don't understand how a company can get away with using its customers as QA when they sell products that can kill people. It's one thing if it's a website for throwing virtual sheep at one another; nobody really cares if your Facebook feed omits an update or two. It's another thing if a software bug means that you die. Or at least it should be - I'm amazed at the number of people who seemingly don't care.
I also wonder what this is doing to Tesla's brand reputation. I was just in the market for solar panels and Tesla's offering (both the panels and the solar roof) was very, very attractive on price & aesthetics, but their reliability record with their cars made me think twice about buying a 25+ year product that keeps my home powered and dry.
> I really don't understand how a company can get away with using its customers as QA when they sell products that can kill people
There's a term for it: "normalization of deviance"[1]. A highly risky thing is done and doesn't fail as early as thought. The people doing it assume that means the risk is lower than they thought, but it's not, and they just happened to get lucky.
I'm sure Elon knows this, so he probably believes their tech can advance enough before their luck turns. I personally think he's deadly wrong on this, but it seems like a lot of Silicon Valley Tesla owners who sleep or read their phones while on Autopilot accept it at face value.
[1] - https://en.wikipedia.org/wiki/Normalization_of_deviance
Obviously I have no idea what's in their heads, but a plausible train of thought goes like:
Legal: If you sell it as a driver assist and not a driver replacement, you're off the hook, with high probability. It's on the drivers if they get complacent. There may even be precedents with similar products for trucks.
Moral: pushing this now will very likely kill some people that would otherwise be alive (drivers or pedestrians). But it's very unlikely to be a carnage - I'd dare say even impossible. On the other hand, moving the technology forward and directly into widespread commercial use will save people, and likely even... what's the equivalent word for a "carnage" but where people are saved? Miracle.
Branding: well, there you have a real risk. Public opinion is fickle, and a particularly gruesome accident might do some real damage, both to Tesla and to self driving technologies in general. But then, they do have a high risk high reward mentality, and the chance is good that the first bad accident (statistically inevitable) will come after the advantages are proven and well established.
> ... using its customers as QA when they sell products that can kill people
They aren't just using their own customers, they are using all of us.
The cars won't be just crashing into inanimate objects, but school buses and ambulances. The QA is being done to all of us and our loved ones near their cars.
Hate to say it but I think you’re in the minority with that opinion. Reaction from most people has been “woah”.
Also, there is no moving fast and breaking things here in real terms. Drivers have to pay attention while driving, and Tesla is strictly monitoring this during the beta. You also specifically have to opt in to the beta software. 99% of Tesla owners do not have this functionality.
Thousands of people die each year because of stupid mistakes by human drivers. From a utilitarian perspective, pushing for self driving as fast as possible - in a manner that some might even judge to be reckless - is a net positive.
To be a safe driver you have to be able to predict what the people around you are going to do. Simply reacting is not enough. I don't think that it's a coincidence that the guy selling self-driving cars is the same guy who goes around exaggerating the current abilities of AI.
I too am shocked that anyone is ok with this, given the auto industry's history of fighting against safety and transparency.
This is exactly why regulations such as professional engineering licenses were put into place. You don’t want someone with that mentality building a bridge.
Stuff like that is only possible in the US. It would be unthinkable in the EU. Maybe in the UK, but you know, they are no longer ...
The underlying motivations explain all: this isn't about improving automobile safety anymore (if it ever was): it's about recognizing deferred revenue from this feature (which has been sold for years) and trying to keep up, from a marketing perspective, with Waymo and Cruise. That's why they're not using trained safety drivers, but fanbois.
Waymo is at least as safe as the average human driver, while Tesla has unleashed literally worse-than-drunk drivers upon us all. Heaven help us.
The ethics of this are heavily influenced by how much you believe a) this rollout will result in a net decrease in automobile accidents and b) how much we should assume drivers will both be informed and intervene if the system missteps. If you believe both are unlikely, or you believe engineers should have a "do no harm" ethics similar to medicine, this would be unacceptable. But if you believe both are highly likely and/or think engineering ethics should focus on harm minimization, this makes sense to do, and under certain assumptions it becomes unethical to not roll it out.
As a concrete example: if you assume that 99% of the accidents that would be caused by the system will be prevented by driver interventions, and you think the system will reduce the overall likelihood of an accident by 20%, the question you have to ask yourself is: if we could save 1 in 5 of the people who would otherwise be harmed over the next 6 months, should we avoid doing so and wait until only 1 in 200 would be harmed, in part due to their own negligence? As you can see, the assumed probabilities matter a lot, so I'd be curious how one can come up with good projections. The only case where these probabilities don't matter is if you believe that engineers ought to never create harm where none would have existed otherwise, but when it comes to self driving cars, this is an impossibility since it assumes perfect autonomy.
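To make the arithmetic above concrete, here is a minimal Python sketch. Every number in it is an invented placeholder (none comes from Tesla or real crash data); the only point is how sharply the conclusion swings with the assumed rates.

    # Toy expected-harm model for the scenario above.
    # All inputs are assumptions for illustration, not measurements.
    baseline_harmed = 1000      # people harmed by human driving in 6 months (assumed)
    system_reduction = 0.20     # fraction of those accidents the system prevents (assumed)
    system_mistakes = 50        # new would-be accidents the system introduces (assumed)
    intervention_rate = 0.99    # fraction of system mistakes drivers catch (assumed)

    saved = baseline_harmed * system_reduction             # 200 people not harmed
    new_harms = system_mistakes * (1 - intervention_rate)  # 0.5 expected new harms
    print(f"saved ~{saved:.0f}, newly harmed ~{new_harms:.1f}, net ~{saved - new_harms:.1f}")
    # Drop intervention_rate to 0.80 and new_harms becomes 10;
    # the verdict hinges entirely on rates nobody outside Tesla can see.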
In general, if you buy into the theory that "human + AI" is always going to be smarter than just AI, then arguably even in a world with great autonomous driving, the best system will be one that has both an AI and a human. And that is what we have here, so it's more a question of whether AI quality is sufficient, or whether this is just a bad idea in general given the natural tendency of humans to be poor co-pilots.
> I really don't understand how a company can get away with using its customers as QA when they sell products that can kill people.
They already claim that drivers must pay attention to the road in order to prevent accidents, and this has been the status quo in the automotive world for a century or so. Selling products that can kill if misused is not remarkable.
This could turn ugly if it's shown that enabling self-driving results in many more accidents, but there is no data to support this.
> get away with [...] products that can kill people
if your moral premise is saving lives, your conclusion does not follow. vehicles are a leading cause of death and injury, so this omelette is worth breaking a few eggs, no?
AV has the potential to save a lot of lives. of course, there will be effects in other markets like insurance, new car sales, road repair, junk/salvage yards, tow trucks, etc. what the net economic effect will be is up for debate.
All car manufacturers sell products that can kill people.
Well, I own a Tesla Model X. When I'm with my family I don't engage self-driving mode; nobody forces us to do it. And when I do, it's in normal situations, like on Highway 101.
The only stats I have seen on this show that the autopilot reduces the rate of collisions. So it's not really a product that kills people, but the opposite. The stats came from Tesla, though, so I can't rule out the possibility that they are incorrect.
does it even become profitable before the 25 year mark in the current market?
This comment strikes me as completely disingenuous.
(1) None of the companies involved have ever had the motto "move fast & break things." Zuckerberg created that philosophy at Facebook. It's foolish to apply it to an entire industry.
(2) They are no more "using its customers as QA" than pharmaceutical companies do when gradually ramping up trials that also kill people. There has been heavy testing before this point, and it seems entirely reasonable to scale up testing before a general release.
From the article: "These YouTube videos underscored how important it is for drivers to actively supervise Tesla's new software. Over the course of three hours, the drivers took control more than a dozen times, including at least two cases when the car seemed to be on the verge of crashing into another vehicle."
In a way, it's good that the Tesla system sucks so badly. If they had a disconnect rate of one per month, drivers would trust the thing.
The other guys, disengagements per 1,000 self-driven miles in California:
Waymo, 0.076, or one per 13,000 miles.
Cruise, 0.082, or one per 12,000 miles.
Not clear how many of those would have resulted in a crash, as opposed to just stopping. Probably not many, the California autonomous DMV reports indicate. US humans have a crash rate of one per 508,000 miles driven.
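For anyone checking the conversion, here's the arithmetic as a small Python sketch; the per-1,000-mile rates are the ones cited above from the California DMV reports, and the output matches the round numbers quoted.

    # Convert disengagements per 1,000 autonomous miles
    # into miles per disengagement, using the rates cited above.
    rates_per_1000 = {"Waymo": 0.076, "Cruise": 0.082}
    for company, rate in rates_per_1000.items():
        print(f"{company}: one disengagement per ~{1000 / rate:,.0f} miles")
    # Waymo: one disengagement per ~13,158 miles
    # Cruise: one disengagement per ~12,195 miles
    # For scale: one human crash per ~508,000 miles driven (as cited above),
    # though a disengagement and a crash are not comparable events.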
Anyway, Tesla does not have "full self driving". They have slightly better Level 2. Not even Level 3. You'd think that by now they'd have hands-off freeway driving totally automated to a better than human level of safety. But no.
It’s not as simple as “the driver can take over if they need to.” In many cases it takes a human too long to recognize the system doing something dangerous and to intervene. This is going to end badly.
Indeed, recognizing the problem takes vital time and taking over to execute a maneuver is way harder than executing the same maneuver while you're already in control.
> In another video, Brandon's Tesla was making a left turn but wasn't turning sharply enough to avoid hitting a car parked on the opposite side of the cross street. "Oh Jeeeesus," Brandon said as he grabbed the steering wheel and jerked it to the left. "Oh my God," Brandon's passenger added.
Maybe it's just me, but this screams clickbait and low effort. The overall article is fine, but I nearly stopped reading after this. Is this kind of writing really necessary nowadays?
Not on HN usually.
I think given the high stakes of killing yourself or other people, we should be brutally unforgiving in highlighting major failures of killer technology.
I agree, it is clearly a clickbait title, with the keyword "reaction" to imitate that type of YouTube video title. Then again, it might be tongue-in-cheek, but since it fits the content perfectly, it's always hard to tell.
Terrifying scenarios are described in the article. This is obviously nothing more than (very) advanced lane assistance and absolutely requires fully observant drivers who might have to take over the wheel in the fraction of a second.
Humans aren't designed to monitor something that requires no action for a long time and then suddenly requires attention, nor are we good at taking over control of something at the last moment. You're not in the "flow" at that point.
The article is correct that expecting average drivers to do this without training is a high risk move. I've given flight instruction, taking over a landing 20ft off the ground is way harder than landing a plane yourself. And that's with a lot of training, not just an average driver being put in a place to supervise Tesla autopilot with no training at all.
> this is obviously nothing more than (very) advanced lane assistance
Is that really obvious? It looks like the system is aware of much more than just lanes. A lane assist system doesn't stop for red lights or stop signs, or make left turns into a side road after waiting for oncoming traffic to clear.
Given that there are regular stories of people playing phone games and crashing whilst using the current lane assist tech, are we safe?
'Real-world testing was needed to uncover what would be a "long tail" of problems, he added.'
https://www.bbc.com/news/technology-53349313
Tesla should follow Waymo's lead and release statistics on collisions while self-driving, and also the number of simulated counterfactual collisions avoided by the human taking over [1]. Based on the multiple incidents from the individual drivers cited in the article, it sounds like the numbers wouldn't be very good.
[1] https://storage.googleapis.com/sdc-prod/v1/safety-report/Way...
Disclaimer: I work for Google but am not involved with Waymo. Opinions my own.
You don't have to read Tim Lee's writing to know how the latest FSD performs. There are lots of videos on YouTube, and new ones posted every day. My personal opinion is that this is a quantum leap over the previous implementation. Much respect to the engineers at Tesla for this amazing accomplishment.
https://www.youtube.com/results?search_query=tesla+fsd+beta
> Tesla ... says it's not intended for fully autonomous operation. Drivers are expected to keep their eyes on the road and hands on the wheel at all times.
If Tesla rolls out a product where you have to pay close attention and be ready to take over, and calls it full self driving, that seems like a massive case of fraud. I'd be pissed if I'd bought the FSD package.
By now it should be obvious—and this is purely descriptive of what I see playing out, not prescriptive—that self-driving tech will have to clear a higher bar than just statistically better in order to become mainstream.
Not only because anecdotes carry more weight than statistics in the media—as this story illustrates—but because people are much less afraid of bad things happening, as long as they're in control when it happens.
Also, I suspect everything associated with fuel-burning cars is by now irrevocably linked with cultural ideals of individualism and freedom. Resistance to EVs and self-driving tech will continually materialize, as if out of thin air, regardless of how well documented the benefits are. It's going to be an uphill battle for the foreseeable future.
What will it take to stop them? I believe it's a settled matter that their self driving is fundamentally flawed due to relying on ML and cameras instead of lidar. Will it take people dying for the government to step in? Lawsuits? I don't think our politicians are well versed enough in how this works, or in the level of risk, to stop this kind of thing from happening.
I think I get less interested in self-driving cars the closer we seem to get to them. It honestly just stresses me out. Driving is something I enjoy doing. I’m not sure I will ever feel comfortable taking my hands off the wheel. This has little to do with how well the tech works in tests or anecdotally; I think I just don’t want this.
Tesla is very "thrifty" with their FSD software. Other companies need to hire safety drivers, but Tesla just foists that labor onto their paying customers, and any liability to boot.
Either the human should never have to intervene (and can sleep), or the human should never be allowed to take their eyes off the road.
Any level between that is dangerous, irresponsible, and should be banned from use on public roads.
Which leads to an interesting question or two: a) why isn’t it? b) what will happen to Tesla and Uber once it’s clear that “Unsupervised driving” will remain a decade away for at least another decade?
https://amp.detroitnews.com/amp/26312107
My key takeaway is regulation of advanced technology lags waaaaaaay behind the disruption caused by the technology.
Each year ~1.3 million people are killed on the road worldwide (interesting to compare to COVID deaths).
Giving people the option to sacrifice their lives to contribute to solving this problem is an amazing thing when you think about how many million people the capability could save over the next century.
These are the risk taking explorers of the modern era.
https://youtu.be/RN5Qoei7v1k?t=2218
It can't even figure out which traffic lights apply to it.
Without releasing data, Tesla is forcing analysts to focus on anecdotes, which suffer from all the usual problems like selection bias. What we need to know:
- How often do drivers intervene?
- How often do drivers fail to intervene and the system causes an accident?
- How often does the system take an action which has a high likelihood of having prevented an accident?
- How often does the system do so when it seems likely a human driver would have failed to do so?
- Integrated together, what are the expected dynamics of these probabilities, and what is their net impact, insofar as releasing the system more widely creates training data to help improve them more quickly over time?
It could very well turn out to be the case that this system is purely positive, strongly net positive, or neutral in harm reduction. The question then is, given that, what ethics should inform its release: is putting the stress on the driver sufficient if it, say, saves 1000 lives in the next six months and will harm no one in exchange, other than some drivers having to intervene and endure moments of stress? What if it will save 1000 and 10 people will be harmed by failing to intervene? What if it's an even swap, harming people who fail to intervene in exchange for saving people who would have inevitably been lost to accidents they couldn't have prevented?
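For what it's worth, here is a minimal Python sketch of how the metrics listed above would combine into a net-impact estimate. Every rate below is an invented placeholder, not data; the exercise only shows that publishing a handful of numbers would settle the question.

    # Hypothetical net-impact estimate from the metrics listed above.
    # All rates are made-up placeholders for illustration.
    miles = 10_000_000                  # fleet miles driven in the beta (assumed)
    interventions_per_mile = 1 / 100    # how often drivers must intervene (assumed)
    missed_rate = 1e-4                  # fraction of needed interventions missed (assumed)
    prevented_per_mile = 1 / 1_000_000  # accidents the system prevents per mile (assumed)

    caused = miles * interventions_per_mile * missed_rate
    prevented = miles * prevented_per_mile
    print(f"expected accidents caused: {caused:.0f}")
    print(f"expected accidents prevented: {prevented:.0f}")
    print(f"net: {prevented - caused:+.0f}")
    # With these placeholders it's an even swap (net +0); small changes
    # to either rate flip it strongly positive or strongly negative.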
Saying "this is wrong to do" based upon anecdotes is dumb analysis. Saying "this is wrong to do" based upon an absolutist form of ethics is fair, but it also means you reject that we should be trying to solve self driving. If you think it's wrong to do and are not doing either, you ought to articulate what scenario in terms of data would justify the action vs not, even if that data is unknown right now.