Am I the only one feeling that the press and big-tech narrative of "driverless vehicles = safety" is way ahead of itself? OK, "driver error is blamed for 94% of crashes", but what is the validation strategy to show where driverless cars can quantifiably improve? In 2016, Philip Koopman of CMU discussed the non-trivial engineering challenges of achieving safety in NHTSA Level 4 vehicle automation: https://users.ece.cmu.edu/~koopman/pubs/koopman16_sae_autono...
If we are building the safest transportation system, what role do driverless vehicles play? Wouldn't that be the narrative that actually saves the most lives?
Imagine it's April 2018 and Waymo releases this statement:
"In the past year our autonomous cars have driven a total of 50 million miles in real-world conditions. Over that period there were 4 collisions, plus an estimated 12 which would have occurred if our trained staff in the drivers seat had not intervened. Insurance analysts estimate that if human drivers had driven under similar conditions there would have been 51 collisions."
Would that be sufficient for you to support the introduction of Level 4 automation?
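For what it's worth, the hypothetical numbers above can be sanity-checked directly (a trivial calculation using only the figures quoted in the statement):

```python
# Sanity-checking the hypothetical figures quoted above.
miles = 50_000_000
av_collisions = 4 + 12           # actual plus estimated-if-no-intervention
human_collisions = 51

av_rate = av_collisions / (miles / 1_000_000)        # collisions per million miles
human_rate = human_collisions / (miles / 1_000_000)  # collisions per million miles
ratio = av_rate / human_rate                         # roughly a 3x difference
```

Even counting the interventions against the autonomous fleet, the scenario implies about 0.32 collisions per million miles versus 1.02 for humans.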
It would be. We've seen some spectacular self-driving vehicle failures lately: Teslas running into half-opened garage doors, into vehicles pulled over on the shoulder, into the side of a semi; cars navigating roundabouts by driving into the ditch, driving on the wrong side of the road, yelling at the driver for staying in their lane...
No you are not! HN seems to think the same way, and it is one of the worst cases of rose-colored glasses I have ever witnessed. What about the cost? Who is going to be able to afford this? And what about people who rely on a vehicle for work? Retrofit $75K work trucks with another $75K in self-driving gizmos? Find me a practical plumber, carpenter, A/C tech, locksmith, or self-employed guy/gal who will sign up for that!
Edit to add... Have the crazy expensive and downright hostile self-driving John Deere tractors eliminated farming deaths?
If improving safety is the goal then the driverless vehicle companies are going at it backwards. Instead of having the computer drive the car most of the time and telling the human to take over in challenging situations they should have the human drive all of the time with the computer constantly monitoring and occasionally overriding the human's control inputs when necessary. Some mainstream vehicles already do this to a limited extent by controlling throttle, brakes, and even steering to prevent crashes. Let's keep expanding on that foundation; it's far more achievable than a true autonomous vehicle and would save more lives in the medium term.
There is a successful precedent in aviation. The latest fly-by-wire flight control systems treat the pilot's inputs as merely suggestions and will modify them as necessary to prevent departure from controlled flight, midair collisions, flight into terrain, and overstressing the airframe.
http://www.thedrive.com/tech/9548/the-biggest-opportunity-ev...
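The proposal resembles flight-envelope protection. A minimal sketch of such a "guardian" layer might look like the following; the function names, thresholds, and time-to-collision heuristic are all invented here for illustration, not taken from any real vehicle stack:

```python
# Sketch of a guardian layer: the human drives, but the computer clamps
# inputs that would lead to a collision. Thresholds are illustrative.

def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact at the current closing speed (inf if opening)."""
    if closing_speed_mps <= 0:
        return float("inf")
    return range_m / closing_speed_mps

def guard(driver_throttle: float, driver_brake: float,
          range_m: float, closing_speed_mps: float):
    """Pass the driver's inputs through unless a crash looks imminent."""
    ttc = time_to_collision(range_m, closing_speed_mps)
    if ttc < 1.5:   # hard threshold: full automatic braking
        return 0.0, 1.0
    if ttc < 3.0:   # soft threshold: cut throttle, ensure some braking
        return 0.0, max(driver_brake, 0.3)
    return driver_throttle, driver_brake
```

The appeal of this architecture is that the computer only needs to recognize a narrow set of dangerous situations, rather than handle all of driving.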
> In 2016, Philip Koopman of CMU discusses that there are non-trivial engineering challenges of achieving safety in NHTSA Level 4 vehicle automation:
That seems like the most obvious statement I've read in the last year or so at least. I think the fact that Alphabet spent more than 10 years developing these self-driving cars already is a pretty strong indicator that there are non-trivial engineering challenges involved.
What is crucial is data. I am not talking about training on road conditions; Google has been doing that for years. What is missing is traffic data. I have made this argument about self-driving before. One night, maintenance took place at a busy intersection in Flushing, NY, and police officers were asked to direct traffic. This was a planned event, but a driverless car does not know that; it doesn't have this data until it shows up at the scene.
Yes, map services like Google have been receiving some data from governments, in addition to collecting users' current locations and generous user feedback (e.g. from Waze users), to determine the best route. But we are nowhere near being able to know what's happening ahead of us. What about weather conditions? At an intersection, who goes first?
A safe driverless vehicle should be able to communicate (for as long as the connection is stable), and governments and cars should share feedback with other cars. This would be a crowd-sourcing effort to make fully autonomous cars possible on the road. If we just learn as we go, these driverless cars will not work well in very complex road conditions.
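To make the idea concrete, here is a toy sketch of what consuming such a shared event feed might look like. The schema, field names, and sample event are all invented for illustration; no real V2I standard is implied:

```python
# Hypothetical V2I (vehicle-to-infrastructure) event feed, invented schema.
import json

planned_events = json.loads("""
[{"type": "maintenance",
  "location": "Main St & Kissena Blvd, Flushing, NY",
  "traffic_control": "police_officer",
  "start": "2017-04-25T22:00:00", "end": "2017-04-26T05:00:00"}]
""")

def route_should_avoid(event, planned_route_segments):
    # A planner could reroute, or switch to a "follow human direction"
    # mode, before arriving at the scene rather than on discovering it.
    return event["location"] in planned_route_segments
```

The point is that the car would know about the planned closure before arriving, instead of having to interpret a police officer's hand signals cold.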
Waaay ahead of itself. There are just some people who see a catastrophe around every corner, despite all the progress we've made in the last century (so-called "catastrophists"). At the same time, there is wild optimism about how we're going to have fully autonomous, Level 5 vehicles doing our errands by 2030.
The case I have heard is that the decisions will be made (or rather have been made) by a team of software engineers, with problem-fixing methods that are meaningfully more effective at getting the accident rate down than someone promising themselves they'll drive sober and well rested "next time".
In reality, the problems encountered by cars are fairly classic and precisely the areas where computers can be very good: computer vision, 3D projection and modeling, pattern recognition, control loops (steering vs. drifting). The beginning was expectedly bad, but it is not a stretch to imagine those problems solvable, just as betting on Moore's law and expecting cheaper mass-market electronics "makes sense".
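As a concrete example of the last item, a lane-keeping control loop can be sketched in a few lines. The PID gains and the toy kinematics below are arbitrary, chosen purely to illustrate the feedback structure:

```python
# Minimal PID lane-keeping sketch; gains and plant model are made up.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive the lateral offset from the lane centre toward zero.
pid = PID(kp=2.0, ki=0.1, kd=0.5)
offset = 1.0                               # metres from lane centre
for _ in range(200):
    steer = pid.update(-offset, dt=0.05)   # steer against the offset
    offset += steer * 0.05                 # crude kinematics
```

After a couple hundred control steps the offset settles close to zero; the hard part of real autonomy is not this loop but everything that decides what the setpoint should be.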
Imagine ransomware infecting your car that gives you 30 minutes to pay or you'll be rammed into a tree at 90mph. I don't care about any contrived and well-funded narratives about how unthinking, unfeeling, driverless cars will somehow save lives. No thanks, I'd rather take my chances continuing to share roads with human drivers who are careful because they fear their own death just as much as I do.
> what is the validation strategy to show where driverless cars can quantifiably improve?
One big way in which automated drivers could improve traffic: by not being a jackass. I remember driving down Westheimer, which is four lanes wide and one of the major thoroughfares of Houston, when I had to slow down for someone merging from a right turn into a middle lane while their head was buried deep in the passenger-side footwell, looking through their stuff.
Then, there are the people who feel like they have to tailgate you within 6 feet during rush hour traffic on the highway. Also, the people who won't let you in for some personal justice you can't possibly understand.
It would only take a smallish fraction of cars implementing the "stay between" algorithm that CGP Grey mentions in his video to significantly improve traffic.
https://www.youtube.com/watch?v=iHzzSao6ypE
I've implemented this algorithm manually. (Much easier to do since I have the instant accelerator response of an electric car.) It does seem to improve traffic flow. Also, jackass tailgaters are sometimes confused by this, and decide to pass.
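For the curious, the "stay between" rule is simple enough to sketch. The gain and the speed model below are made up; this is only the shape of the idea, not how the video specifies it:

```python
# Toy "stay between" rule: rather than tailgating, aim for the midpoint
# of the gap between the car ahead and the car behind, which absorbs
# stop-and-go waves instead of propagating them.

def stay_between_speed(my_speed, dist_ahead, dist_behind, gain=0.1):
    """Nudge speed so the car drifts toward the middle of its gap."""
    # Positive when the gap ahead is bigger than the gap behind.
    imbalance = dist_ahead - dist_behind
    return max(0.0, my_speed + gain * imbalance)
```

A car running this speeds up slightly when it lags the middle of its gap and slows slightly when it crowds the car ahead, which is exactly the wave-damping behavior the video describes.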
> what is the validation strategy to show where driverless cars can quantifiably improve?
That seems like an odd question to ask. There are huge reams of traffic safety data collected every year by the NTSB and others. Traffic accidents are one of the top two or three treatable public health issues in the modern world, and they get very significant public funding for their study.
Honestly I think the question has to be the reverse: what is it about "driverless" safety data that makes you think it won't be well-measured by the existing "validation" regime?
I feel like the question is not if "driverless vehicles = safety" but when. So, all other things being equal, the quicker we get to that point, the more lives we can save.
Obviously not all other things are equal, and we could potentially create greater loss of life by trying to get self-driving cars on the road too quickly. But that is the nature of the problem: how do we avoid moving too fast while recognizing that the current danger, human drivers, should be removed from the equation as quickly as feasible?
Is Waymo clear about what the end goal is? I can't tell if they plan to launch an Uber like service, or a direct-to-consumer lease type service, or just to be suppliers (whole vehicles, components like lidar, software) to companies like Uber. Or maybe they are leaving all of that undecided for now?
I'm not sure this rider trial thing strongly signals any of them. You would want real world end-customer experiences regardless, I would think.
I wonder this about autonomous cars in general, from any company. They're cool tech, and in Europe they could really help solve several last-mile problems and cut down on car ownership. (I really don't think people should be able to own self-driving cars; they need airplane-style maintenance for their sensors, and companies need to work together so all fleets have the latest safety and security updates.)
However, I don't really like hearing about tax money going into these projects. At least in America, self-driving cars won't solve gridlock. It will be 15–30 years before we can have autonomous-car-only highways, and even then, self-driving cars don't come close to the capacity of a real train-based mass transit network. Singapore has had self-driving trains for years (a much easier problem), and London is automating more of its lines.
Self-driving cars are cool tech, but they're not going to solve gridlock or many of the major transport problems we face today:
http://penguindreams.org/blog/self-driving-cars-will-not-sol...
IMHO, it may be a little short-sighted to look at Waymo and self-driving cars in isolation. I believe it fits into the broader DL/AI initiatives within Alphabet as yet another DL/AI application. Whether it becomes an end in itself or simply feeds other DL/AI initiatives is most likely still undefined.
I think they're still early enough in the process that they want to leave their options open. I'm sure they're going to be capturing a lot of information about how the riders, community, and press handle this first step — and use that to determine where they want to go from there.
"We're at the point when it's really important to find how real people, outside the Google environment, will use this technology," said John Krafcik, Waymo's chief executive officer. "Our goal is that they will use this for all their transportation needs."
It seems they are split on whether car ownership or ride-sharing will be more viable:
"Yes, self-driving technology makes sense for ride-sharing," said Krafcik, [...] "It also makes sense for personal car ownership." Transportation to and from transit hubs and logistics also made his list.
Self-driving cars have been in the development phase for so long, I'm really excited to see them start rolling out in these beta tests.
I'm curious what sort of unconsidered edge cases they'll find out in the real world. I'm sure test passengers are much more "disciplined" than real world ones.
I would like to see little leased pod cars which trundle around at the local speed limit within a certain range, can move into a bus lane (I presume the bus is dead), and can join some kind of magnetic track where they go 200 miles an hour.
That would be pretty amazing. Why are they sticking shit all over two-ton vehicles? If they are solving fuel, emissions, and the driver, can't they just take the final step of a revolution? I am pretty sure governments would be throwing notes at it.
So now what do we tell our kids? "Don't get into a van even when nobody is driving it." ? :-)
I think it is great that they are getting additional exposure to nominally real world users here. However, I'm not exactly sure what they are learning in user behaviors. Is it "Can we make a less expensive livery service?" or is it "How freaked out do people get in self driving cars?" or is it something else?
I went to a conference last week where several talks were pretty critical of self-driving 'hype', given the HLS[1] issues and the ability to inexpensively 'spoof' the AI[2] into seeing something that isn't actually there (road signs being particularly vulnerable). It left me thinking I might be more optimistic about the technology than I should be.
[1] "Health, Life, Safety" the general basket of things that are super critical to minimizing injury and death.
After seeing the movie Logan, I was curious if the movie was taking a jab at the autonomous vehicle trend by portraying them as a danger to ma & pa drivers on the highway. It showed a crowded highway full of shipping containers pushing a truck hauling horses off the road. It seems the message was not that the technology can't be courteous, but that once it's accepted, corporations will abuse the roads to help their bottom line. It makes me wonder if that is a valid concern.
> but that once it's accepted, corporations will abuse the roads to help their bottom line
Those evil corporations, being all corporate-y!
I haven't seen the movie Logan but the gist is that you think BigCorp will modify their cars so they drive dangerously fast, threatening the other non-AI cars on the road? And they will risk killing people or damaging vehicles in order to improve shipping time? And you think this will be prevalent? There won't be economic, legal, hiring, or social consequences from police, shareholders, politicians, employees, and the AI companies? Because they're big corporations run by greedy CEOs who get away with anything?
By all accounts, autonomous vehicles will be much safer than human drivers. Companies will be sued up the ass if they program their vehicles to take risks and end up killing people.
While that came about as a form of corporate abuse, the scene in the movie was less about corporations abusing public roads to improve bottom line, and more about a specific corporation trying to kill a specific farmer to take his land!
CTRL+F for "So they were on highways today on
those trucks"
The whole scene is about Big Corn trying to take a specific plot of land whose owner would not sell.
In that situation, it's less about the dangers of autonomous vehicles and more about corporations being willing to murder people. The next scene involves guns and fighting.
Of course it is valid. I doubt that any major company will countenance literally forcing people off the road, but they will probably lobby to have the law do it for them.
My fear is MORE urban sprawl, which means even MORE miles of highways to pave and maintain, and MORE pollution in the short term until we get very eco-friendly electric cars.
Commutes are one of the only things that seem to limit urban sprawl.
Transportation in general will become much cheaper. Because driverless cars will get into far fewer accidents than humans, the cost of insuring them will be a fraction of what it costs to insure a human driver today. And when transportation gets cheaper, basically every product that needs to be moved gets cheaper (assuming the companies using this technology don't just pocket the savings).
Also, basically drivers of all kinds will no longer have jobs.
This is an important step for testing, and I'm surprised that this is being spun as a move towards commercialization.
As of right now, Waymo has only been doing what could metaphorically be called unit testing. That is, they test the car's behavior in very controlled but unrealistic environments, looking for very specific responses. The accident rate they've incurred is likely ridiculously skewed: they've been driving in good weather, on meaningless routes (chosen by route features, not by destination), at relatively safe times of day, at slow speeds, and they've been doing it extremely cautiously with engineers ready to take over at a moment's notice.
This is exactly what they should have been doing, but politically it is misleading. Most human drivers, given those same constraints, would also do extremely well and way better than average. They've done well, but we have little basis for comparison with the average driver.
This is the first step toward integration testing. They get to see how the car's behavior holds together across scenarios that are much closer to real life. They are driving on actual routes that real people travel...routes that aren't chosen to test one specific behavior.
Accident rates are going to go up. That's a good thing...it's a move toward the things humans find more difficult too. We should, however, expect slowing progress toward Level 4 autonomy. This is typical of system capability growth: exponential in the beginning, asymptotic near the end. People who are rushing this are out of line; it's akin to immediately commercializing a success in lab rats. Give them time.
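The growth pattern described here is the classic logistic S-curve. A quick illustration, with entirely arbitrary parameters:

```python
# Logistic ("S-curve") capability growth: fast early gains, asymptotic
# approach to the hard last percent. Parameters are arbitrary.
import math

def capability(t, ceiling=1.0, rate=0.8, midpoint=5.0):
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

mid_gain = capability(5) - capability(4)    # steep part of the curve
late_gain = capability(10) - capability(9)  # flattening near the ceiling
```

Near the midpoint each unit of effort buys a large capability jump; near the ceiling the same effort buys almost nothing, which is why the last steps to Level 4 should be expected to take disproportionately long.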
I dream of the day when humans driving vehicles will be forbidden by law unless the driver can prove a need for human driving. So many deaths would be avoided.
I find it smart that Waymo waited until the snowbirds left to start this. It will be comical when the blue hairs come back next winter and smoke a few of these self-driving cars.... or vice versa. Obviously I don't want anyone to get hurt, but... it is going to happen.
Google's car clearly isn't going to work in the real world (can it follow detour signs? Get out of the way of emergency vehicles? Understand a cop directing traffic?). Why is Google pretending otherwise? Why are they doing this?
Google's car can already read the hand signals from bicyclists[0]. They have logged literally millions of miles of safe driving. They really aren't "pretending".
However, there are different classifications[1] of autonomous vehicles. Google's car is currently somewhere around level 3. It's true nobody has a level 4 car yet.
[0] https://www.engadget.com/2016/07/05/google-autonomous-cars-c... [1] https://en.wikipedia.org/wiki/Autonomous_car#Classification
There are a host of obvious reasons why even a "simple" L4 car will beat a human hands down:
* Reaction time.
* 360 awareness / visibility.
* No exaggerated "human reflex" responses to surprising events (e.g. swerving violently to avoid a dog and hitting other vehicles)
* Keeping perfect space around the vehicle for safe stopping at all times. (don't tailgate, don't get rear-ended)
* Assuming a LIDAR-based system, virtually no difference in day vs. night vision. No "sun in your eyes" or "road glare".
* No fatigue related accidents.
* No DUI and related prescription drug accidents.
* No distraction-based accidents. (kids, cellphone, food)
* No "road rage" based accidents.
I could go on, but this should drive the point home pretty clearly, IMHO.
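To put the reaction-time bullet in numbers (a back-of-envelope sketch; both latencies are assumed values, not measurements from any vehicle):

```python
# Distance covered during reaction time at highway speed.
speed_mps = 30.0          # ~108 km/h
human_reaction_s = 1.5    # assumed perception-reaction time for a human
machine_reaction_s = 0.1  # assumed sensor-to-brake latency for a machine

extra_distance = speed_mps * (human_reaction_s - machine_reaction_s)
```

Under those assumptions a human travels roughly 42 extra metres, about ten car lengths, before braking even begins. That margin alone prevents a large class of rear-end collisions.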
* People can apply to use/borrow a Waymo vehicle (RX450 or Pacifica) for some period of time.
* "as part of this early trial, there will be a test driver in each vehicle monitoring the rides at all times."
* Limited to Phoenix metro area for the time being.
* You apply here: https://waymo.com/apply/
* Waymo will be adding 500 more of the Pacifica minivans for this program.