>We did implement checks for what seemed to us as more common failure scenarios, but the devil here was that this one first appeared during the run and we did not cover it at the analytical analysis stage.
Sounds like the kind of stage where you might wonder if analytical uncertainty is being multiplied.
>And this, ladies and gentlemen, is why self-driving is hard.
That's not all: computer programming on its own can be hard enough to do as well as average people drive their cars to begin with.
And that's just average, which isn't good enough either.
Not to mention the hopelessly dangerous drivers, some of them reckless just for the hell of it.
Might even be driving an 18-wheeler with some _interesting_ scenery painted in realistic scale on the trailer. That's always distracting, but I don't think a machine will process that kind of distraction much like an ordinary person any time soon.
There will always be the occasional vehicle that rapidly assumes a trajectory completely unpredictable by either human or machine.
Perhaps also beyond the capability of either to avoid a tragic outcome.
Perhaps not, and whether the outcome would differ between the two may be even harder to predict.
I wonder if it's something like this: their out-of-bounds checks required a comparison to return true (e.g., `if val < min ...`), and since NaN-involved comparisons always return false...
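That failure mode is easy to reproduce. A minimal sketch (in Python rather than the project's actual code; the function names and bounds are made up for illustration): a check written as "flag values that compare as outside the range" silently passes NaN, while the inverted "require the value to prove it is inside the range" form rejects it.

```python
def out_of_bounds(val, lo, hi):
    # Check written "negatively": only flags values that compare as
    # outside the range. NaN compares false to everything, so it is
    # never flagged and slips through as "in bounds".
    return val < lo or val > hi

def in_bounds(val, lo, hi):
    # Safer inversion: the value must prove it is inside the range.
    # NaN fails both comparisons and is correctly rejected.
    return lo <= val <= hi

nan = float("nan")
print(out_of_bounds(nan, 0.0, 10.0))  # False: NaN passes the check
print(in_bounds(nan, 0.0, 10.0))      # False: NaN is rejected
```

The two predicates look equivalent for ordinary numbers; they only diverge on unordered values like NaN, which is exactly why this class of bug survives "common scenario" testing.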
So this appears to be the very old story of software pushed to production before it was ready. I have to wonder whether some Marketing/PR/Management person(s) pushed the engineers to release the software before it was ready, or whether it was a lack of due diligence on the part of the engineers, who couldn't wait to get their code running in the real world.
>We did implement checks for what seemed to us as more common failure scenarios, but the devil here was that this one first appeared during the run and we did not cover it at the analytical analysis stage.
A little more time spent testing the edge cases could have helped avoid this expensive failure.
You have to test for much more than just the "common failure scenarios" when you engineer solutions where failure can be very expensive in terms of money and lives.
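One way to put that into practice is to feed every validator the non-finite edge cases up front. A quick sketch (the validator name and range here are hypothetical, not the project's actual code):

```python
import math

def validate_telemetry(val, lo=0.0, hi=100.0):
    # Hypothetical validator: accept only finite numbers inside the range.
    # math.isfinite rejects NaN and both infinities before the range check.
    return math.isfinite(val) and lo <= val <= hi

# Edge cases that "common failure scenario" tests tend to miss:
# only the ordinary in-range value should be accepted.
for v in [float("nan"), float("inf"), float("-inf"), 50.0]:
    assert validate_telemetry(v) == (v == 50.0)
```

Making NaN, ±infinity, and boundary values a standing part of the test inputs costs almost nothing and catches exactly the kind of in-run surprise described in the quote.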
"Ironically, [the NaN value] did show up on telemetry monitors, but it showed up along with 1.5k other telemetry values.... an NaN value is not a valid number, meaning that validation would not be performed on it."
I had a similar error happen while testing my autonomous vehicle several years ago. My GPS module was in simulate mode and wasn't giving correct data to the feedback loop that needed it. In the absence of correctly changing data, a runaway effect took over.
It's the same basic failure that crashed the 737 max.
When the feedback loop is broken in closed loop controls, failures like this happen.
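The runaway effect from a broken feedback loop can be sketched with a toy proportional controller (all numbers and names here are illustrative, not the actual vehicle's control law): with live feedback the loop converges, but with a stale sensor reading the commanded correction never shrinks and the true state runs away.

```python
def step(position, command, dt=0.1):
    # Plant: position integrates the commanded velocity.
    return position + command * dt

def controller(target, measured, gain=2.0):
    # Simple proportional controller: command is proportional to error.
    return gain * (target - measured)

# Healthy loop: the controller sees the true position and converges.
pos_live = 0.0
for _ in range(50):
    pos_live = step(pos_live, controller(10.0, pos_live))

# Broken loop: the sensor is stuck at 0 (like a GPS left in simulate
# mode), so the controller keeps commanding a full correction and the
# true position runs far past the target.
pos_stale = 0.0
for _ in range(50):
    pos_stale = step(pos_stale, controller(10.0, 0.0))

print(pos_live)   # close to the target, 10
print(pos_stale)  # far past the target
```

The healthy loop settles near 10 because the error, and therefore the command, shrinks each step; the broken loop adds the same full-size correction every step, which is the runaway described above.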
One issue is that MATLAB is a hilariously inappropriate tool for any real software engineering, including robotics. What's worse is that it's just powerful enough to empower your traditional PE to be dangerous.
MATLAB's Stateflow toolkit is actually excellent for robotics engineering. Without more information, that MATLAB declaration in the article could mean many things.
fallingfrog|5 years ago
Well it’s not like I’ve never made a mistake but that’s a funny sentence!
8bitsrule|5 years ago
Uhhhmmmmmmm....
EADGBE|5 years ago
Man, I’ve been bitten by that one too many times.