top | item 34608992

wwqrd | 3 years ago

Totally agree. If human drivers make more mistakes per km than autopilot, then what sense does it make to stop self-driving cars?

Ekaros | 3 years ago

I think a good question is whether the kilometers driven are comparable.

That is, is the self-driven set the same as the human-driven set, without any exclusions? And if not, are the excluded kilometers, and any accidents that occurred during them, counted in the self-driving statistics?

LatteLazy | 3 years ago

A lot of people (including replies to my root comment) seem to take a puritanical rather than a pragmatic view: if a self-driving car makes any mistakes at all, that's too many. But if the human driver it replaces is much worse and will make many more (and more serious) errors, should that driver not be replaced?

notafraudster | 3 years ago

Humans are generally viewed to have an inherent right -- or at least a strong imperative -- to fully participate in society. Restricting a human's right or privilege to do something typically carries a higher burden than simply asking whether the thing they are doing is a net positive/negative. Over the last 100 years the layout of cities and countries reflects an expectation that full participation in society requires access to transportation. While driving is not an absolute right and we subject drivers to certain minimum requirements (licensing and sometimes periodic relicensing, insurance, various laws that apply to conduct in a motor vehicle), the presumption is basically to create a pathway that allows people to drive.

Driving creates a danger and a number of costly externalities. Those are the costs side of the cost-benefit equation. It may be the case that AI drivers have generally lower costs because of your assertion that they make fewer mistakes or less grave mistakes than humans. This does not lead to the obvious inference that allowing an AI driver today in a particular place under a particular regime of rules confers a benefit equal to allowing a human to drive.

In the same way, for instance, we do not generally pass laws or policies against people having children, even if we recognize that some people are suboptimal or unfit parents. Instead we presume fitness and build in some checks to catch excessively unfit parenting later. Whatever the cost of suboptimal parenting, we recognize the choice or ability to have a child to be part of basic human dignity, and so the benefits outweigh the costs. Many jurisdictions place more onerous requirements on pet ownership than on parenthood, even though the downside cost of being a negligent pet owner is less severe than that of being a negligent parent.

Perhaps the calculus will be different when AI driving is costless and ubiquitous; we might decide collectively that humans have no inherent right to manually operate vehicles and that full participation in society does not require them to. But in the meantime, holding AI driver street tests to a higher standard than human drivers can be justified strictly on a cost-benefit (and thus pragmatic) basis.

You might counter that, well, AI driving tests don't provide a benefit now, but given a particular utility function, the testing offers <x> marginal training value towards a future reduction in costs (in terms of injury/delay/death). But then it depends on your discount factor of the present versus the future, which is a socially determined function.

There's also an underlying presumption that there's some elasticity of resource allocation. Perhaps allocating $x million towards AI driving reduces deaths or injuries or delays; but people can readily contest that $x million spent on other options could do so more efficiently (perhaps infrastructural investment in public transit, perhaps additional free driver's training, perhaps more robust emergency services, perhaps safety features). Because much of this investment is occurring in the private sector, people rely on the state's regulation of these efforts to incentivize allocating resources towards preferred investments. This is also pragmatic, rather than puritanical.

It's fine to disagree or have a more bullish view of AI driving, or to feel these concerns are overrated, but it makes very little sense to characterize them as puritanical (in contrast to pragmatic). Some people may have puritanical views, just as some proponents of the technology may have messianic views. But both positions can be justified entirely from rational argument depending on one's utility function. Why strawman?

You are a really uncharitable poster here. Your first intervention was to dismiss anyone who disagrees with you about which metrics are most useful as "bullshit" and "emotional", and here you call them puritans. I think you should engage in some self-reflection and try to approach conversation online in a less antisocial way.