top | item 45870684

tippytippytango | 3 months ago

Ultimately, anecdotes and testimonials of a product like this are irrelevant. But the public discourse hasn't caught up with it. People talk about it like it's a new game console or app, giving their positive or negative testimonials, as if this is the correct way to validate the product.

Only rigorous, continual, third party validation that the system is effective and safe would be relevant. It should be evaluated more like a medical treatment.

This gets especially relevant when it gets into an intermediate regime where it can go 10,000 miles without a catastrophic incident. At that level of reliability you can find lots of people who claim "it's driven me around for 2 years without any problem, what are you complaining about?"

A 10,000-mile-per-incident fault rate is actually catastrophic. It means the average driver has a serious, life-threatening incident roughly every year at an average driving rate. That would be a public safety crisis.

We run into the problem again in the 100,000-mile-per-incident range. This is still not safe, yet it's reliable enough that many people could get lucky and go their whole lives without seeing the system cause a catastrophic incident, even though it's still 2-5x worse than the average driver.
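To make the arithmetic concrete, here is a minimal back-of-envelope sketch. The 13,500 miles/year figure is my assumption (a rough US-average annual mileage, not a number from this thread):

```python
# Back-of-envelope check of the reliability claims above.
# ASSUMPTION (not from the thread): a US driver averages roughly
# 13,500 miles per year.
ANNUAL_MILES = 13_500

def years_between_incidents(miles_per_incident: float) -> float:
    """Expected years between catastrophic incidents for one average driver."""
    return miles_per_incident / ANNUAL_MILES

for rate in (10_000, 100_000):
    print(f"{rate:>7,} mi/incident -> one incident every "
          f"{years_between_incidents(rate):.1f} years of average driving")
```

At 10,000 miles per incident this works out to roughly one serious incident per driver per year, matching the claim above; at 100,000 miles it stretches to several years, long enough that many individual drivers would never personally witness a failure.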

irjustin|3 months ago

> Only rigorous, continual, third party validation that the system is effective and safe would be relevant. It should be evaluated more like a medical treatment.

100% agreed, and I'll take it one step further - level 3 should be outright banned/illegal.

The reason is that it allows blame shifting, exactly as is happening right now. Drivers mentally expect level 4, while legally the company will position the fault, insofar as it can get away with it, on the driver, effectively level 2.

atlex2|3 months ago

They're building on a false premise that human equivalent performance using cameras is acceptable. That's the whole point of AI - when you can think really fast, the world is really slow. You simulate things. Even with lifetimes of data, the cars still will fail in visual scenarios where error bars on ground truth shoot through the roof. Elon seems to believe his cars will fail in similar ways to humans because they use cameras. False premise. As Waymo scales, human just isn't good enough, except for humans.

simondotau|3 months ago

It can be misleading to directly compare disengagements to actual catastrophic incidents.

The human collision numbers only count actual incidents, and even then only ones reported to insurance or authorities. They don't include many minor incidents such as hitting a bollard, curb rash, or bump-and-run incidents in car parks, or even vehicle-on-vehicle incidents where both parties agree to settle privately. And the numbers certainly exclude ALL unacceptably close near-misses. There are no good numbers for any of these, but I'd be shocked if minor incidents weren't an order of magnitude more common, and near misses another order of magnitude again.

Whereas an FSD disengagement could merely represent the driver's (very reasonable) unwillingness to see if the software will avoid the incident itself. Some disengagements don't represent a safety risk at all, such as when the software is being overly cautious, e.g. at a busy crosswalk. Some disengagements for sure were to avoid a bad situation, though many of these would have been non-catastrophic (such as curbing a wheel) and not a collision which would be included in any human driver collision statistics.

omgwtfbyobbq|3 months ago

As a robotaxi, yes. That's why Tesla's rollout is relatively small and slow, has safety monitors, etc.

FSD, what most people use, is ADAS, even if it performs a lot of the driving task in many situations, and the driver always needs to be monitoring it, no exceptions.

The same applies to any ADAS: if it doesn't work in a situation, the driver has to take over.

terminalshort|3 months ago

If there was actually a rate of one life threatening accident per 10,000 miles with FSD that would be so obvious it would be impossible to hide. So I have to conclude the cars are actually much safer than that.

buran77|3 months ago

FSD never drives alone. It's always supervised by a driver who is legally responsible for correcting it. More importantly, we have no independently verified data about the self-driving incidents. Quite the opposite: Tesla has repeatedly obscured data or impeded investigations.

I've made this comparison before but student drivers under instructor supervision (with secondary controls) also rarely crash. Are they the best drivers?

I am not a plane pilot but I flew a plane many times while supervised by the pilot. Never took off, never landed, but also never crashed. Am I better than a real pilot or even in any way a competent one?

tippytippytango|3 months ago

Above I was talking more generally about full autonomy. I agree the combined human + FSD system can be at least as safe as a human driver, perhaps safer, if you have a good driver. As a frequent user of FSD, its unreliability can be a feature: it constantly reminds me it can't be fully trusted, so I shadow drive and pay full attention. It's like having a second pair of eyes on the road.

I worry that when it gets to 10,000 mile per incident reliability that it's going to be hard to remind myself I need to pay attention. At which point it becomes a de facto unsupervised system and its reliability falls to that of the autonomous system, rather than the reliability of human + autonomy, an enormous gap.

Of course, I could be wrong. Which is why we need some trusted third party validation of these ideas.
