A pickup truck being towed crooked and backwards. Both Waymo vehicles misread the situation in the same way.
Autonomous vehicles have various redundant systems built-in that can take priority and override false positives.
I was previously under the assumption that one of the really important reasons for lidar is that it can get you closer to an absolute truth about whether something is a solid object, and where that object is relative to the position of the vehicle, regardless of what the classifier thinks it is seeing.
So did the lidar fail to detect the solid object, or was it de-prioritized, or was it simply not available as a fallback?
Presumably Radar and proximity sensors were also involved. What were they doing?
This is a fascinating edge case, and I hope to hear the real reason for the two incidents.
The article says it: “We determined that due to the persistent orientation mismatch of the towed pickup truck and tow truck combination, the Waymo AV incorrectly predicted the future motion of the towed vehicle.”
It was detected, but it predicted the truck would move in a way that it didn’t end up moving.
One of the best things I've learnt recently is how to apply to my own teams the zero-blame, process-improvement approach that (many) air safety regulators take.
I'd sat through 'five whys' style postmortems before, but it was reading air safety investigation reports that finally got me to understand it and make it a useful part of how we get better at our jobs.
By comparison, the way we're investigating and responding to self-driving safety incidents still seems very primitive. Why is that?
One difference with this situation in terms of the public perception/discussion though is that, say in the 1960s, air safety wasn't very good compared to today, but still there was no question of eliminating air travel altogether due to safety issues. Today there is definitely an anti-self-driving contingent that would like to hype up every accident to get the self driving companies shut down entirely.
Another comparison with air safety: there, the tolerance for disaster risk is low enough that a suspected fault or flaw is sufficient to ground the affected aircraft.
In this case two self-driving cars crashed into another road vehicle because they failed to recognise (in time) which direction it was moving. Waymo should be commended for having voluntarily issued a software recall, but this problem is severe enough that the decision shouldn't really be up to Waymo's good judgement.
Sounds like they were relying solely on their neural network path prediction, which failed when the truck was dragged at an odd angle.
A simple lidar moving object segmentation, which doesn't even know what it's looking at but can always spit out reasonable path predictions, would probably have saved them.
I think Mobileye is doing something like this, but they release so little information, and what they do release is so full of marketing bullshit, that it's hard to know what exactly they are working on.
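The classifier-free prediction described above could be sketched roughly like this (a toy illustration of my own; the function and parameters are invented for this example): fit a constant-velocity model to where a segmented lidar cluster has actually been, and extrapolate, without ever deciding what the object is.

```python
import numpy as np

def extrapolate_track(centroids, dt=0.1, horizon=2.0):
    """Classifier-free motion prediction from raw lidar cluster centroids.

    centroids: (N, 2) sequence of a segmented object's centre positions
    over the last N frames.  We fit a constant-velocity model per axis and
    extrapolate -- no notion of what the object *is*, only where its
    points have actually been.
    """
    centroids = np.asarray(centroids, dtype=float)
    t = np.arange(len(centroids)) * dt
    # Least-squares line fit per axis: position = intercept + velocity * t
    vx, x0 = np.polyfit(t, centroids[:, 0], 1)
    vy, y0 = np.polyfit(t, centroids[:, 1], 1)
    future_t = t[-1] + np.arange(dt, horizon + dt, dt)
    return np.stack([x0 + vx * future_t, y0 + vy * future_t], axis=1)

# An object drifting diagonally -- whatever its apparent heading --
# is simply predicted to keep drifting diagonally.
pred = extrapolate_track([(0, 0), (1, 0.5), (2, 1.0), (3, 1.5)])
```

Because the prediction comes only from observed displacement, a truck crabbing sideways at an odd angle is extrapolated along its actual track rather than along the direction its body appears to point.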
It's unlikely to be neural-network based. This sounds like a model-prediction failure. You take a mathematical model of car motion: the rear wheels generally don't steer, and the front steered wheels can make the car drive along an arc. If you want to predict the arc that will be driven, you take the initial heading of the vehicle and project forward in time based on your understanding of the vehicle's steering angle. For most "driving in lane at velocity" cases, you would generally assume that the vehicle has very little steering input.
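As a rough illustration of the kind of kinematic model this comment describes (my own sketch with invented names and parameters, not Waymo's actual code): project the pose forward under the usual front-wheel-steering, no-slip assumption.

```python
import math

def predict_path(x, y, heading, speed, steer_angle, wheelbase=3.0,
                 dt=0.1, horizon=3.0):
    """Project a vehicle's future positions with a kinematic bicycle model.

    Assumes front-wheel steering and no slip -- the simplification the
    comment describes.  A towed pickup with locked, turned front wheels
    violates the model: its body heading is offset from its actual
    direction of travel, so the projected arc is wrong.
    """
    path = [(x, y)]
    for _ in range(round(horizon / dt)):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        heading += (speed / wheelbase) * math.tan(steer_angle) * dt
        path.append((x, y))
    return path

# With zero steering input the model predicts a straight line along the
# heading -- exactly the "driving in lane at velocity" assumption above.
straight = predict_path(0.0, 0.0, heading=0.0, speed=15.0, steer_angle=0.0)
```

Feed such a model the crooked heading of a backwards-towed truck and it will confidently predict an arc the truck never drives.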
We're now getting to see where autonomy needs to develop a "spider sense": the scene in front of me feels wrong because some element isn't following the expected behavior, maybe in ways that can't easily be reasoned about explicitly, so we become much more conservative/defensive when dealing with it.
I wish there was a picture of the strange towing configuration. I wonder if I would be confused as well, although my guess is that I'd read the situation correctly.
This is what people don't appreciate when quoting those statistics about how self-driving cars are safer than humans: when a human driver causes an accident, it's because that particular person did something wrong. When a self-driving car handles a situation wrongly, that's a big issue, because all the self-driving cars run the same software.
On the other hand, when a human driver causes an accident, one driver learns a lesson (maybe). When a self-driving car causes an accident, all cars get to learn from it.
This is my biggest fear with self-driving cars: correlated failures. As a society we are extremely good at dealing with independent accidents. We can calculate very precisely how many people will die in traffic in a given year and we can account for it; we can have insurance, and we can decide exactly how much we are willing to spend to save a life on the margin.
But if everything is fine, everything is fine, everything is fine, and then all hell breaks loose? We are not as good at dealing with that.
My fear is similar, but more along the lines of adversarial attacks as various weaknesses are exposed. Imagine people taking advantage of zero-day exploits that cause self driving cars to veer off the road, stop suddenly, collide, etc. It is really not that far fetched. This technology is very far away from maturity, IMO.
Yeah, there's nothing akin to a software update that would cause the entire fleet of human drivers to start driving badly or unexpectedly all at once.
We also know how to hold individuals accountable for independent accidents. We know we won't get justice when people will inevitably start to get killed by standard corporate greed, incompetence, enshittification.
Do we have a picture of the truck? I'm having difficulty imagining it given that surely the tow truck would want the towed vehicle in-line to make driving go smoothly?
The towed vehicle had its rear (driven) wheels up on a hook-type tow truck, and it sounds like the steering wheel was locked while turned. That would lead to the angled tracking of the front wheels.
This would be common for a debt recovery, or when a city impounds a vehicle and it's taken without the cooperation of the owner.
[Recycled from an older submission] Well, I feel kinda vindicated by this news, after previously noting:
> People worry that ways and times [self-driving cars] are unsafe (separate from overall rates) will be unusual, less-predictable, or involve a novel risk-profile.
In this example, having a secretly cursed vehicle configuration is something that we don't normally think of as a risk-factor from human drivers.
_______
As an exaggerated thought experiment, imagine that autonomous driving achieves a miraculous reduction in the overall accident/injury rate, down to just 10% of when humans were in charge... However, of the accidents that still happen, half are spooky events where every car on the road targets the same victim for no discernible reason.
From the perspective of short-term utilitarianism that's an unqualified success, but it's easy to see why it would be a cause for concern that could block adoption.
https://waymo.com/blog/2024/02/voluntary-recall-of-our-previ...
When I make a dumb mistake, the other drivers rarely learn from it.
https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2F...