frabjoused|1 year ago

The thing that doesn't make sense is the numbers. If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

When I did the trial on my Tesla, I also noted these kinds of things and felt like I had to take control.

But at the end of the day, only the numbers matter.

timabdulla|1 year ago

> If it is dangerous in your anecdotes, why don't the reported numbers show more accidents when FSD is on?

Even if it is true that the data show that with FSD (not Autopilot) enabled, drivers are in fewer crashes, I would be worried about other confounding factors.

For instance, I would assume that drivers are more likely to engage FSD in situations of lower complexity (less traffic, little construction or other impediments, overall lesser traffic flow control complexity, etc.) I also believe that at least initially, Tesla only released FSD to drivers with high safety scores relative to their total driver base, another obvious confounding factor.
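
A toy illustration of that concern (invented numbers, not real crash data): if FSD is engaged mostly on easy roads, its aggregate crash rate can look better even if it is worse than a human driver on every individual road type.

    # Hypothetical crashes per million miles, by road type
    human = {"highway": 1.0, "city": 4.0}
    fsd = {"highway": 1.2, "city": 5.0}  # worse than the human rate on both

    # Hypothetical miles driven (millions): FSD engaged mostly on highways
    human_miles = {"highway": 20, "city": 80}
    fsd_miles = {"highway": 90, "city": 10}

    def aggregate_rate(rates, miles):
        crashes = sum(rates[r] * miles[r] for r in rates)
        return crashes / sum(miles.values())

    print(aggregate_rate(human, human_miles))  # 3.40 crashes per million miles
    print(aggregate_rate(fsd, fsd_miles))      # 1.58 crashes per million miles

The mix of miles alone flips the comparison, which is why a raw FSD-vs-human rate without controlling for road type, traffic, and driver selection doesn't tell you much.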

Happy to be proven wrong though if you have a link to a recent study that goes through all of this.

valval|1 year ago

[deleted]

rvnx|1 year ago

There is an easy way to know what is really behind the numbers: look at who pays in the case of an accident.

You have a Mercedes, Mercedes takes responsibility.

You have a Tesla, you take responsibility.

Says a lot.

sebzim4500|1 year ago

Mercedes had the insight that if no one is able to actually use the system then it can't cause any crashes.

Technically, that is the easiest way to get a perfect safety record and journalists will seemingly just go along with the charade.

tensor|1 year ago

You have a Mercedes, and you have a system that works virtually nowhere.

diebeforei485|1 year ago

While I don't disagree with your point in general, it should be noted that there is more to taking responsibility than just paying. Even if Mercedes Drive Pilot was enabled, anything that involves court appearances and criminal liability is still your problem if you're in the driver's seat.

_ea1k|1 year ago

Because it is bad enough that people really do supervise it. I see people argue that supervision wouldn't happen because drivers become complacent.

Maybe that could be a problem with future versions, but I don't see it happening with 12.3.x. I've also heard that driver attention monitoring is pretty good in the later versions, but I have no first-hand experience yet.

valval|1 year ago

Very good point. The product that requires supervision and tells the user to keep their hands on the wheel every 10 seconds is not good enough to be used unsupervised.

I wonder how things are inside your head. Are you ignorant or affected by some strong bias?

kelnos|1 year ago

Agree that only the numbers matter, but only if the numbers are comprehensive and useful.

How often does an autonomous driving system get the driver into a dicey situation, but the driver notices the bad behavior, takes control, and avoids a crash? I don't think we have publicly-available data on that at all.

You admit that you ran into some of these sorts of situations during your trial. Those situations are unacceptable. An autonomous driving system should be safer than a human driver, and should not make mistakes that a human driver would not make.

Despite all the YouTube videos out there of people doing unsafe things with Tesla FSD, I expect that most people that use it are pretty responsible, are paying attention, and are ready to take over if they notice FSD doing something wrong. But if people need to do that, it's not a safe, successful autonomous driving system. Safety means everyone can watch TV, mess around on their phone, or even take a nap, and we still end up with a lower crash rate than with human drivers.

The numbers that are available can't tell us if that would be the case. My belief is that we're absolutely not there.

bastawhiz|1 year ago

Is Tesla required to report system failures or the vehicle damaging itself? How do we know they're not optimizing for the benchmark (what they're legally required to report)?

rvnx|1 year ago

If the question is “was FSD activated at the time of the accident: yes or no?”, then they can legally claim no, for example if FSD happens to disconnect half a second before a dangerous situation (e.g. glare obstructing the cameras), which may coincide exactly with the moment of some accidents.

Uzza|1 year ago

All manufacturers have for some time been required by regulators to report any accident where an autonomous or partially autonomous system was active within 30 seconds of an accident.

gamblor956|1 year ago

The numbers collected by the NHTSA and insurance companies do show that FSD is dangerous... that's why the NHTSA started investigating, and it's why most insurance companies either won't insure Tesla vehicles or charge significantly higher rates.

Also, Tesla is known to disable self-driving features right before collisions to give the appearance of driver fault.

And the coup de grace: if Tesla's own data showed that FSD was actually safer, they'd be shouting it from the moon, using that data to get self-driving permits in CA, and offering to assume liability if FSD actually caused an accident (like Mercedes does with its self driving system).

nkrisc|1 year ago

What numbers? Who’s measuring? What are they measuring?

ForHackernews|1 year ago

Maybe other human drivers are reacting quickly and avoiding potential accidents from dangerous computer driving? That would be ironic, but I'm sure it's possible in some situations.

akira2501|1 year ago

You can measure risks without having to witness disaster.

lawn|1 year ago

> The thing that doesn't make sense is the numbers.

Oh? Who is presenting the numbers?

Is a crash that doesn't trigger the airbags even counted as a crash?

What about the car turning off FSD right before a crash?

How about adjusting for factors such as the age of the driver and the type of miles driven?

The numbers don't make sense because they're not good comparisons; they're constructed to make Tesla look good.

johnneville|1 year ago

Are there even transparent reported numbers available?

For whatever does exist, it's also easy to imagine how the numbers could be misleading. For instance, I've disengaged FSD when I noticed I was about to be in an accident. If I couldn't recover in time, the accident would not have happened while FSD was on and, depending on the metric, would not be reported as an FSD-induced accident.

kybernetikos|1 year ago

> But at the end of the day, only the numbers matter.

Are these the numbers reported by tesla, or by some third party?

throwaway562if1|1 year ago

AIUI the numbers are for accidents where FSD is in control. Which means if it does a turn into oncoming traffic and the driver yanks the wheel or slams the brakes 500ms before collision, it's not considered a crash during FSD.

Uzza|1 year ago

That is not correct. Tesla counts any accident within 5 seconds of Autopilot/FSD turning off as the system being involved. Regulators extend that period to 30 seconds, and Tesla must comply with that when reporting to them.
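
As a rough sketch (invented timestamps, not anyone's actual logging code), that attribution rule amounts to something like:

    # Count a crash against the driver-assist system if it was still engaged,
    # or had been disengaged within `window_s` seconds before impact.
    def attributed_to_system(crash_time, last_disengage_time, window_s):
        if last_disengage_time is None:  # never disengaged before the crash
            return True
        return (crash_time - last_disengage_time) <= window_s

    crash_t = 100.0

    # Driver yanks the wheel 0.5 s before impact: counted under both windows
    print(attributed_to_system(crash_t, 99.5, window_s=5))   # True
    print(attributed_to_system(crash_t, 99.5, window_s=30))  # True

    # Disengaged 12 s before impact: counted by the 30 s rule, not the 5 s one
    print(attributed_to_system(crash_t, 88.0, window_s=5))   # False
    print(attributed_to_system(crash_t, 88.0, window_s=30))  # True

So a disconnect half a second before the crash would still be counted under either window.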

concordDance|1 year ago

Several people in this thread have been saying this or similar. It's incorrect, from Tesla:

"To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact"

https://www.tesla.com/en_gb/VehicleSafetyReport

Situations which inevitably cause a crash more than 5 seconds later seem like they would be extremely rare.