paxunix | 3 years ago

It's not that they necessarily need true reasoning. It would just be extremely beneficial, because that's what every human driver already has built in (although exercised with varying degrees of success).

What about an axe that falls off the landscaping truck in front of you? Or a mattress? Or a harmless styrofoam cooler? The self-driving car doesn't know what those things are; it only knows that <something> is suddenly in the air and likely to collide with you. The computer can't predict how the mattress or the axe or the styrofoam will fly through the air; all it can do is take abrupt, evasive action. It's also unaware that the driver behind the car has been periodically glancing down at their phone instead of watching the road, so an abrupt swerve or stop may still cause an accident. A sensible human driver would realize they're taking on additional risk by following such a truck and/or staying in front of the distracted driver, and might decide to change lanes safely. The car's pretend AI has no idea of any of this until something falls off and it has to react. When it gets it right, we'll praise it--ooooh, how ingenious! And when it fails, the apologists will claim "there's no way it could have made a perfect decision--look how many other times it gets it right!" while the realists conclude "ha, stupid computer, told ya so".

eternityforest | 3 years ago

Humans have reasoning; AI has zero aggressive instinct and instant reaction time. It could be a pretty even battle.

They might make really bad decisions in edge cases (which will get less and less common as more cars get smart), but they might make up for it with perfect behavior in ordinary circumstances.

They will never prioritize convenience or speed or avoiding angering other people over safety. They'll do the safe thing even if no human driver could maintain that level of paranoia at all times.