tippytippytango|3 months ago
Only rigorous, continual, third-party validation that the system is effective and safe would be relevant. It should be evaluated more like a medical treatment.
This becomes especially relevant once the system reaches an intermediate regime where it can go 10,000 miles without a catastrophic incident. At that level of reliability you can find lots of people who claim, "it's driven me around for 2 years without any problem, what are you complaining about?"
A 10,000-mile-per-incident fault rate is actually catastrophic. At average annual mileage, it means the typical driver would have a serious, life-threatening incident every year. That would be a public safety crisis.
We run into the problem again in the 100,000-mile-per-incident range. This is still not safe, yet it's reliable enough that many people could get lucky and go their whole lives without seeing the system cause a catastrophic incident. Even so, it's still 2-5x worse than the average driver.
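The arithmetic behind these fault-rate claims is easy to check. A minimal sketch in Python, assuming roughly 13,000 miles of driving per year (around the US average; this figure is my assumption, not from the comment):

```python
# Rough check of the fault-rate arithmetic above.
# ANNUAL_MILES is an assumed figure (~US average), not from the thread.
ANNUAL_MILES = 13_000

def incidents_per_year(miles_per_incident: float,
                       annual_miles: float = ANNUAL_MILES) -> float:
    """Expected serious incidents per driver per year at a given fault rate."""
    return annual_miles / miles_per_incident

# 10,000 miles per incident: more than one serious incident per driver per year.
print(incidents_per_year(10_000))   # 1.3
# 100,000 miles per incident: roughly one serious incident per decade of driving.
print(incidents_per_year(100_000))  # 0.13
```

Under these assumptions, a 10,000-mile fault rate means over one serious incident per driver per year, which is why it would read as a public safety crisis despite many individual drivers seeing nothing.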
irjustin|3 months ago
100% agreed, and I'll take it one step further - level 3 should be outright banned/illegal.
The reason is that it allows blame shifting, exactly as is happening right now. Drivers mentally expect Level 4, while legally the company will position the fault on the driver as much as it can get away with, treating the system as effectively Level 2.
simondotau|3 months ago
The human collision numbers only count actual incidents, and even then only ones which have been reported to insurance or authorities. They don't include many minor incidents such as hitting a bollard, curb rash, or bump-and-run incidents in car parks, or even vehicle-on-vehicle incidents where both parties agree to settle privately. And the numbers certainly exclude ALL unacceptably close near-misses. There are no good numbers for any of these, but I'd be shocked if minor incidents weren't an order of magnitude more common, and near misses another order of magnitude again.
Whereas an FSD disengagement could merely represent the driver's (very reasonable) unwillingness to see if the software will avoid the incident itself. Some disengagements don't represent a safety risk at all, such as when the software is being overly cautious, e.g. at a busy crosswalk. Some disengagements for sure were to avoid a bad situation, though many of these would have been non-catastrophic (such as curbing a wheel) and not a collision which would be included in any human driver collision statistics.
omgwtfbyobbq|3 months ago
FSD, which is what most people use, is ADAS: even though it performs much of the driving task in many situations, the driver must be monitoring it at all times, no exceptions.
The same applies to any ADAS. If it doesn't work in a situation, the driver has to take over.
buran77|3 months ago
I've made this comparison before but student drivers under instructor supervision (with secondary controls) also rarely crash. Are they the best drivers?
I am not a pilot, but I have flown a plane many times under a real pilot's supervision. I never took off and never landed, but I also never crashed. Am I better than a real pilot, or even competent in any way?
tippytippytango|3 months ago
I worry that when it gets to 10,000-mile-per-incident reliability, it's going to be hard to remind myself that I need to pay attention. At that point it becomes a de facto unsupervised system, and its reliability falls to that of the autonomous system alone rather than human + autonomy, an enormous gap.
Of course, I could be wrong. Which is why we need some trusted third party validation of these ideas.
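The size of that gap can be made concrete with a toy model: the supervised failure rate is the autonomy failure rate times the fraction of failures the human fails to catch. The catch rates below are illustrative assumptions, not measured values:

```python
# Toy model of human + autonomy reliability. All numbers are illustrative
# assumptions: one autonomy failure per 10,000 miles, and two hypothetical
# human catch rates (attentive vs. complacent).

def supervised_miles_per_incident(miles_per_incident: float,
                                  human_catch_rate: float) -> float:
    """Miles per incident when a human catches a fraction of autonomy failures."""
    miss_rate = 1.0 - human_catch_rate
    return miles_per_incident / miss_rate

AUTO_RATE = 10_000  # autonomy alone: one incident per 10,000 miles (assumed)

# An attentive driver catching 99% of failures: ~1,000,000 miles per incident.
print(supervised_miles_per_incident(AUTO_RATE, 0.99))

# A complacent driver catching only 50%: 20,000 miles per incident.
print(supervised_miles_per_incident(AUTO_RATE, 0.50))
```

Under these assumed numbers, the supervised system is two orders of magnitude safer than the autonomy alone, which is exactly what is lost if drivers stop paying attention.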