3dfan | 4 years ago

By the time people are allowed to let their cars drive unsupervised, the crash rate of AI versus human will probably be 1:10 or so.

So even when those cars ram into fire trucks from time to time, it would be better to let them do their thing. Otherwise people will grab the steering wheel, drive drunk, sleepy, angry etc and ram into all kinds of things again.

Currently, there are 6 million car accidents per year in the USA. Almost 100 people die in car accidents every day. So there is a ton of data to make the decision.
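
As a rough sanity check (a sketch using only the round figures quoted above, not official statistics), here is what those numbers imply per year, and what a hypothetical 10x-safer fleet would look like:

    # Back-of-envelope arithmetic using the round numbers from this comment
    # (hypothetical figures, not official statistics).
    deaths_per_day = 100            # "almost 100 people die ... every day"
    crashes_per_year = 6_000_000    # "6 million car accidents per year"

    deaths_per_year = deaths_per_day * 365
    print(f"Implied deaths per year: {deaths_per_year:,}")              # 36,500

    # If an AI fleet really crashed at 1/10 the human rate across the board:
    print(f"Crashes per year at 10x safety: {crashes_per_year // 10:,}")  # 600,000
    print(f"Deaths per year at 10x safety: {deaths_per_year // 10:,}")    # 3,650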

moralestapia|4 years ago

>By the time people are allowed to let their cars drive unsupervised, the crash rate of AI versus human will probably be 1:10 or so.

This sort of statement keeps being parroted over and over again. As Linus would say, talk is cheap, show me the code; then we can speak.

>So even when those cars ram into fire trucks from time to time [...]

This is just insane, honestly, and if this is the premise that guides the development of these sorts of systems, I'll be glad to never set foot in one.

shkkmo|4 years ago

The entire way we approach driver liability in general is insane.

The Attorney General of South Dakota was looking at his phone, swerved onto the shoulder, killed a man, and then left the scene. He claimed he thought he had hit a deer, even though the victim's head went through the windshield and the victim's glasses were later found inside the car.

What consequences did the Attorney General face? Was his license revoked or suspended? Did he serve any jail time? Did he resign? The answer to all of these is "No." The only result was two misdemeanors and a $500 fine.

So yes, accepting occasional inhuman errors from a system that is 10x safer than human drivers (hypothetical; no current system has this record) may also be insane, but it would still be far more sane than the current approach to human drivers.

pfortuny|4 years ago

There is no code, that is the problem.

You (or the Transport Administration) do not have access to the training data, the training parameters or anything at all.

It is just a black box that we are all expected to trust because "it works, mostly".

In my book, that goes against any notion of admissibility by a government agency.

amelius|4 years ago

Yes, anything or anyone outside the norm becomes a target.

E.g. wear your hair in a funny way that the model has never been trained on? You're a target!

croon|4 years ago

In aggregate, humans have a lot of failure modes when driving, but it's also difficult to compare aggregate data with specific AI failure modes.

I have been driving for almost two decades with zero accidents. I'm not saying I can't have a lapse of judgement or do something stupid going forward, but I certainly won't misclassify an object, nor kill myself over it.

I hypothetically want bad drivers to be replaced by AI because it's likely already better than they are. But replacing everyone with the current generation of AI (which is neither the first nor the last) will undoubtedly lead to tons of avoidable deaths, and I'm not keen on drawing a lottery ticket for it.

jacobr1|4 years ago

I'm more interested in replacing _other_ drivers than in replacing myself. Really, if we could replace the bottom 10% of drivers with AI, even at the level we have today, I imagine that would be a net improvement. But that isn't really a feasible program. As for future, improved AI, I would trade my own driving for the more efficient and safer system.

kevingadd|4 years ago

People already let their Teslas drive unsupervised; they're just "not supposed to". That will be increasingly permitted over time, either implicitly or explicitly. It's not a switch that will be flipped nationwide once the data hits a threshold.