I'd be interested to see how it made progress somewhere like Sicily (Naples is meant to be even more exciting to drive in but I haven't been there) or Thailand or India.
[Note that I haven't visited Thailand for about 20 years or Sicily for 10, and I've never been to India, so my perceptions may be out of date, or, in India's case, based only on media/TV coverage.]
I was in Palermo not all that long ago (2009) and I did wonder how self-drivers could ever navigate the cluster there. In my trips to Seoul, I could see them doing okay on the main roads, but once you get off those, things get really tight and irregular very quickly.
Heck, if a self-driving car can get through some of the old parts of Boston and Philly, I think they'd have a chance. Hell, if they can navigate D.C., with all the crazy turns that change depending on the hour of the day, they definitely have a chance.
It's an interesting problem that exposes the fundamental difference between humans and algorithms:
While some humans may be able to adapt to driving in Sicily or Thailand after the U.S., algorithms will never be able to do that. Unless, of course, they are specifically designed for driving in the U.S., Sicily and Thailand. (Or Russia, for that matter, where driving is barely subject to any formalization.)
Humans can learn and adapt; it's in our nature. An algorithm is just a reflection of some narrow aspect of what we humans have learned. Change a few external variables and your algorithm fails miserably.
I'm sorry, but somehow I don't believe in the future of self-driving cars, just like I don't believe in algorithms that would do image or voice recognition as well as we do.
Ok, cool. So insurance companies should be lining up to insure autonomous cars, ya? As soon as an insurance company will insure an autonomous car, we will be able to buy them and use them. Or maybe the autonomous car makers will accept the risk? This will be great, because the only person who won't have to have insurance is me, the driver! Er... passenger.
Insurance companies won't care if you're driving yourself or using a self-driving car.
They price their premiums so that they always make a profit, regardless of the risk.
That's why drivers who have had accidents pay more than drivers with none, and young drivers (who are statistically riskier) pay more than older drivers.
Insurance companies will simply adjust the cost of insurance for self-driving cars to account for risk based on past data. If those cars turn out to be involved in fewer accidents, the insurance will cost less; more accidents, and it will cost more.
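That adjustment is just expected-payout pricing with a margin on top. A toy sketch, with all accident rates and claim costs invented for illustration:

```python
# Toy model of risk-based premium pricing: the insurer sets each
# premium to the expected payout plus a margin, so it profits on
# average regardless of how risky the insured driver (or software) is.

def premium(expected_accidents_per_year, avg_claim_cost, margin=0.15):
    """Annual premium = expected payout scaled up by a profit margin."""
    expected_payout = expected_accidents_per_year * avg_claim_cost
    return expected_payout * (1 + margin)

# Hypothetical accident rates (made up, purely illustrative):
human = premium(expected_accidents_per_year=0.05, avg_claim_cost=20_000)
robot = premium(expected_accidents_per_year=0.01, avg_claim_cost=20_000)

print(f"human-driven: ${human:,.0f}/yr")   # $1,150/yr
print(f"self-driving: ${robot:,.0f}/yr")   # $230/yr
```

If the self-driving accident rate really is lower, the same formula spits out a lower premium; no change to the insurer's business model is needed.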
Is no one going to point out the fact that the data came from Google themselves? I mean, I'm excited about autonomous cars too, but what happened to taking everything with a pinch of salt?
This is a good point. They compared the braking and acceleration of the vehicles vs. a human driver. A human driver supplied by Google. Nevertheless it's clear this technology has immense near-term promise.
It's also interesting to think about the financial implications for localities, as data shows that the average police officer issues approximately $300k/yr in speeding tickets. (http://www.statisticbrain.com/driving-citation-statistics/)
I'd like to know what those autodrivers do for fail-safety. What about software crashes? Bugs? Sensors giving bad readings? Intermittent loose connections? Chips failing?
They should at least talk to the engineers who design airplane autopilots, who know how to deal with that stuff.
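For what it's worth, one standard avionics-style answer to bad readings is redundancy with majority voting: read the same quantity from independent sensors and take the median, so one faulty sensor can't steer the car. A minimal sketch (the threshold and numbers are invented):

```python
# Triple modular redundancy: median-vote three independent sensor
# readings so a single bad reading (or dead sensor) is outvoted.

def vote(readings, max_spread=2.0):
    """Return (median reading, fault flag). The fault flag is set when
    the three sensors disagree by more than max_spread, signalling that
    the system should fall back to a safe mode."""
    a, b, c = sorted(readings)
    fault = (c - a) > max_spread
    return b, fault

speed, fault = vote([30.1, 29.9, 97.0])  # one sensor gone haywire
# speed == 30.1 (the median), fault == True
```

Whether Google's cars do anything like this isn't stated in the article; this is just the textbook pattern from flight-control systems.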
In that same vein but perhaps more subtle, what of the social failsafes? I keep imagining this scenario (one which I run into consistently in the Bay Area), where I am looking to merge into the lane to my left because my lane has ended, and nobody will fucking let me in. Does the automatic driver have a level of aggressiveness that it follows? What if two automated drivers have the same level of aggressiveness and one wants to merge and the other won't let it? Do they just stop, resigning themselves to an endless loop of near catastrophe because neither will yield? I'd love to know more about exactly by what driving conventions these automated drivers abide. I think it'd go a long way to help people understand how such a process could be automated at all.
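One plausible way out of the identical-aggressiveness deadlock, borrowed from how network protocols like Ethernet break collisions, is a randomized tie-break. Whether any real autonomous system does this is pure speculation; this toy sketch just shows why the loop terminates:

```python
import random

# If two cars run the identical deterministic policy ("yield iff the
# other is more aggressive"), an exact tie could deadlock forever.
# A randomized tie-break resolves it with probability 1: on a tie,
# each car independently re-rolls its priority and they compare again.

def resolve_merge(aggr_a, aggr_b, rng=random.random):
    """Return 'A' or 'B': which car merges first."""
    while True:
        if aggr_a != aggr_b:
            return "A" if aggr_a > aggr_b else "B"
        # Tie: both cars draw a fresh random priority and retry.
        aggr_a, aggr_b = rng(), rng()

print(resolve_merge(0.5, 0.5))  # terminates almost surely
```

The point isn't the specific mechanism; it's that "both yield forever" is a solved class of problem in distributed systems.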
What do people do when their cars fail, or when there are drunks behind the wheel? I sort of agree with what you're saying, but there are very serious people working on this technology (it's not just Google). There will be accidents, but far fewer than with human drivers. It may seem fantastic, but if you actually look at the current state of the technology, and even conservative projections by outlets like the Economist... it's quite impressive.
How does privacy factor into this? Doubtless these cars would be even easier to track than modern ones. But could they be hacked, or overridden remotely and driven someplace?
The code behind this will have to be completely open sourced to allay all fears.
I don't doubt for one second that someone in the US government already has plans to backdoor, take over, and turn an autonomous vehicle infrastructure against people if they deem it necessary for "the greater good." Imagine one day your car decides you need to drive out to a facility in the desert for "questioning..." Or suddenly a group of them gets commandeered as an ad-hoc roadblock, or any number of ridiculous but maybe too-tempting-to-resist possibilities, which people couldn't escape should these things become a dominant form of transportation.
I did not get any new information out of this article. Not only does the title basically tell you everything that the article is going to talk about, but the statement is obvious. Of course an autonomous driver is safer than a human.
I think this was the main thing that hasn't been shown with data before: "One of those analyses showed that when a human was behind the wheel, Google’s cars accelerated and braked significantly more sharply than they did when piloting themselves. Another showed that the cars’ software was much better at maintaining a safe distance from the vehicle ahead than the human drivers were."
They were known to be very safe, with a very low (almost non-existent) accident rate, but I believe this was the first presentation of some of the more detailed driving data, such as acceleration and braking.
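The braking/acceleration comparison described above reduces to simple statistics over a logged speed trace. A sketch with invented data (speeds in m/s, sampled once per second; the traces are made up, not Google's):

```python
# Peak acceleration/braking magnitude from a speed log: the kind of
# metric behind "the software accelerated and braked less sharply".

def max_abs_accel(speeds, dt=1.0):
    """Largest acceleration or deceleration magnitude, in m/s^2."""
    return max(abs(b - a) / dt for a, b in zip(speeds, speeds[1:]))

software = [0, 1.5, 3.0, 4.5, 6.0, 6.0, 4.8, 3.6]   # gentle, even steps
human    = [0, 4.0, 6.0, 6.0, 2.0, 6.0, 6.0, 0.0]   # jerky stop-and-go

print(max_abs_accel(software))  # 1.5
print(max_abs_accel(human))     # 6.0
```

Smoothness metrics like this (and following distance, measured the same way from logs) are exactly the sort of thing that can be compared objectively between the two drivers.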
There are real legal questions about what happens when robotic cars are really ready. For instance, what happens IF the vehicle actually does have an accident? Who is liable? Is it the "driver" who wasn't actually controlling the car, or the company that wrote the driving software? There will be several hundred ambulance chasers ready to sign up victims of any accident in order to get a class action suit against Google.
While that's true, it's not as big a deal as you'd think if these cars are dramatically safer. Auto insurance is a 180 billion dollar industry in the US (that's a billion with a B). If the cars are actually safer, they will have dramatically lower insurance premiums, which will allow the manufacturer to charge more money for the car upfront and still come out ahead even if they assume lots of liability. People will find a way to make it work because there's just too much money to be made.
There are very serious people working on this. The reality is there will be fewer accidents. Some people think the manufacturers themselves will take on the insurance liability, eventually "disrupting" the auto insurance industry completely. Sure, there will be media hay made out of any accidents in the beginning; it's like anything. It will be hard to deny the safety of this mode of transportation. Actually, the most unsafe aspect of all this will be mixing human drivers on the road with efficient autonomous driving systems.
> For instance, what happens IF the vehicle actually does have an accident? Who is liable? Is it the "driver" who wasn't actually controlling the car, or the company who wrote the driving software?
Why would it be exclusive? If a vehicle has an accident and there is a driver, an owner who is not the driver, and a manufacturer, then, depending on the circumstances, it is possible for all three to have some degree of liability. With a robotic vehicle, there's really not that much difference.
"And existing product liability laws make it clear that a car’s manufacturer would be at fault if the car caused a crash, he said. He also said that when the inevitable accidents do occur, the data autonomous cars collect in order to navigate will provide a powerful and accurate picture of exactly who was responsible."
Predictably, many people are skeptical. But just stop and think about how many times you've caused an accident or a near miss, ignored a road sign, or gotten stuck while merging, and ask yourself whether the computer would have done a better job. The new buzzword is "zero fatalities," and it's far too important a goal to dismiss with the typical I'm-a-better-driver-than-any-dumb-machine attitude.
Yep, it starts. I happen to really love driving, clean record, etc. In 10 years, it's going to be getting touchy for folks who still prefer driving themselves rather than letting the self-driving cars do it. The freedom will be exchanged for safety, and nobody will blink.
The cost will be prohibitive for most people. And in the USA, the car culture is too ingrained. I don't see autonomous cars and human-driven cars sharing the same roads. I don't think the technology will be ready in ten years anyway.
No, it doesn't discuss other self-driving competitors, unless you count humans as competitors. Let's get real: very soon, self-driving cars will be tremendously safer and more efficient than human drivers, by a wide margin.
What leads you to think they don't [already talk to the autopilot engineers]?
When will they come out?