(no title)
rifung | 6 years ago
Maybe logically that makes sense, but from an ethical perspective I'd argue it's much more complicated than that (e.g., the trolley problem).
In the current system, if a human is at fault, they take the blame for the accident. If we decide to move to self-driving cars that we know are far from perfect but statistically better than humans, who do we blame when an accident inevitably happens? Do we blame the manufacturer, even though their system is operating within the limits they've advertised?
Or do we just say, well, it's better than it used to be, and it's no one's fault? When the systems become significantly better than humans, I can see this perhaps being a reasonable argument, but if they're only slightly better, I'm not sure people will be convinced.
erobbins | 6 years ago
So far, I have been killed exactly zero times in car crashes. All the empirical evidence tells me that there's no need to surrender control to a computer.
If I die in a crash, perhaps I'll change my mind...
strbean | 6 years ago
Are deaths where blame can be placed preferable to deaths where it cannot? By what factor? Should we try to exchange one of the latter for two of the former?