Hacked billboards make Teslas see phantom objects causing them to swerve or stop

199 points | alistairSH | 5 years ago | newsweek.com | reply

257 comments

[+] black_puppydog|5 years ago|reply
Anyone else getting strong "I, Robot" (the movie) vibes from this line of research?

I can see this happening as an opening shot to an action movie.

Some reporter finds out that $government is doing $bad_thing. They pack up the evidence, rush out the door to talk to their editor. A slick, self-driving car pulls up as they exit the building, they enter and pull away.

It is a rainy night, and the car zips along the city roads alongside thousands of others; just another regular evening. Cars passing each other at break-neck speeds with safety margins that are only really safe with positronic brains at the wheel.

We see the street, as seen by the computer. Bounding boxes and labels hovering over multiple viewpoint video feeds, all drawn in slick black & blue cyber aesthetic. Suddenly something changes at the left of the image, alerts go off.

Cut to the exterior, a super-slow-motion shot of the car from a back-right angle, with a big billboard visible on the left side of the road. It shows cryptic patterns that seem to depict running children, but never the whole child, just portions of them, in changing shapes and positions, with neural-network adversarial noise covering the rest of the image.

Cut back to the onboard computer's view, still in slow motion. Red bounding boxes surround the billboard, and then a big red warning flashes in the center of the screen:

COLLISION IMMINENT.

All kinds of meters at the bottom of the dashboard start going wild as the car tries to find a safe exit state, the slow motion gradually returning to real time. It swerves to the right. The image rocks as the car is hit by another passing car. Another hit. The world on screen spins as the car goes flying.

Static noise fills the screen.

Next scene:

A news broadcast announces that there was a high-casualty accident on Suchandsuch Road last night. Investigators find that it was most probably caused by a repair shop installing a faulty aftermarket AI module in a car.

[+] Unklejoe|5 years ago|reply
I have to admit, that was captivating. The part about cryptic images that resemble children gave me the chills for some reason.
[+] Dork_Sider|5 years ago|reply
I mean you've basically described the opening to Upload on Amazon.
[+] pritovido|5 years ago|reply
Billboards at night should be forbidden. Period.

I almost had an accident when a billboard that had been dark suddenly displayed a very bright image on my side. My body reacted by jerking the steering wheel, a pure reflex.

At another one, a big moving image actually made my eyes follow the movement instead of the road and the cars on it. I had to put in real effort to focus on the road and not the screen.

Bright billboards also ruin your eyes' adaptation to the dark.

[+] noodlesUK|5 years ago|reply
I suggest a simpler solution. Billboards (particularly the LED array ones) should be forbidden, period. They are explicitly designed to pull drivers' attention away whilst they are doing a very dangerous activity. Why? Because they attract customers and make businesses money. This kind of thing exacts a needless human toll, and we could eliminate it with the stroke of a pen.
[+] black_puppydog|5 years ago|reply
> Regularly operating billboards make Humans see phantom objects causing them to swerve or stop (somehow-no-newspaper-ever.com)
[+] proactivesvcs|5 years ago|reply
The entire concept of roadside advertising should be seriously looked at. The roads are dangerous enough as it is - especially with attention gradually being degraded by cockpit distractions - without superfluous equipment which is specifically designed to attract attention.
[+] morpheuskafka|5 years ago|reply
I think the concept of the line of death, from browser security UI design, is relevant here. The space outside of the highway right of way is completely uncontrolled and the Tesla should know better than to look at a billboard hundreds of feet up in the air for regulatory signs. The MUTCD has rules about where signs are to be placed--look only in those places.

[1] https://textslashplain.com/2017/01/14/the-line-of-death/
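
As a toy illustration of that placement gate, a sketch in Python. The thresholds below are invented for illustration, not actual MUTCD values:

    # Only trust regulatory-sign detections that sit where a real sign
    # is allowed to be. Numbers here are hypothetical, not MUTCD values.
    MAX_SIGN_HEIGHT_M = 3.0       # assumed mounting-height ceiling
    MAX_LATERAL_OFFSET_M = 5.0    # assumed max distance from road edge

    def plausible_sign_placement(height_m: float, lateral_offset_m: float) -> bool:
        return (height_m <= MAX_SIGN_HEIGHT_M
                and lateral_offset_m <= MAX_LATERAL_OFFSET_M)

    # A billboard face 10 m up and 15 m off the roadway fails the gate:
    assert not plausible_sign_placement(10.0, 15.0)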

[+] godelski|5 years ago|reply
Someone else linked this[0] article that has a video of the attack. I think seeing the video will paint a different picture than what you have in your head. The "billboard" here is in a location that a stop sign would be, not your average highway billboard.

One would hope Tesla's system has some depth awareness. Honestly, I'd be impressed that they could do so much if it didn't.

[0] https://www.wired.com/story/tesla-model-x-autopilot-phantom-...

[+] shajznnckfke|5 years ago|reply
The problem is that they don’t have lidar and therefore can’t reliably measure depth, right? A billboard far away high up in the air may appear in the same direction as a closer road sign if you don’t have depth perception. You should be able to infer it from a camera with the right model but those methods are more fallible. Similar root cause to that time the guy got beheaded by a white truck perpendicular to the highway that looked like the sky.
[+] raxxorrax|5 years ago|reply
If you don't evaluate everything your camera sees, you basically just decrease its FOV. Sure, that might net you fewer errors for the objects that are most relevant most of the time.

But it is the unexpected we need to train computer vision for to compete with humans. We can have rules for things like billboards, sure, but those will never be completely implemented everywhere.

The software just needs to be improved to identify something as a billboard.

[+] fyp|5 years ago|reply
There are a lot of similar failure modes in humans.

Sometimes they're used for good, like making drivers slow down: https://www.insider.com/optical-illusions-3d-crosswalk-drivi...

But other times they cause crashes: https://imgur.com/a/kYr94

[+] Barrin92|5 years ago|reply
>There are a lot of similar failure modes in humans.

There are a lot of similar failure modes in individual humans. The difference is that individual driver errors are not correlated: a bad reflection tricks a few humans at once, not tens of thousands of cars running the same model. There is significantly more brittleness in a fleet of cars than in a population of humans, because the human population has diversity in judgement and experience.

This is not simple to fix, because the uniformity is actually a feature of automated systems: it makes them explainable, predictable, conformant, and cheap; training a human for 20 years is more expensive than training all the cars once. However, it also makes them collectively vulnerable, which is why heavy machines tend to be locked away on factory floors.

[+] mcintyre1994|5 years ago|reply
I wonder if these self-driving cars would recognise those 3D crossings as the same as the normal kind. I could see them looking quite different to a neural network that's looking for features and is a bit overfitted.
[+] rtx|5 years ago|reply
Those examples in the first link are so bad, I hope they don't catch on.
[+] mattbee|5 years ago|reply
This is clever, and Tesla's software is awful, but anyone doing this for real, with intent to divert cars from the road etc. would be committing a pretty serious crime.
[+] Polylactic_acid|5 years ago|reply
The wonderful thing about IoT is the criminal can be on the other side of the world with no chance of any justice coming to them.
[+] jacobsenscott|5 years ago|reply
There are already billboards with pictures of stop signs, stop lights, etc on them. It doesn't take any criminal intent. Things that were once just billboards are suddenly hazards.
[+] accurrent|5 years ago|reply
To be fair, I think autonomous vehicles should be required to carry a RADAR or LIDAR as a backup sensor (even if it's a low-resolution one). Using ML on images for everything is not going to solve your safety problems, because there's no way to guarantee the system won't crash when presented with a fake input. It's a darn sight harder to pretend to be farther away on a radar or lidar, though. At the very least, an autonomous vehicle should be able to guarantee you don't crash into static obstacles.
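A rough sketch of the kind of cross-check that enables (all names and thresholds here are invented; assumes a camera detection with an estimated bearing plus a lidar point cloud in the vehicle frame):

    import numpy as np

    def confirmed_by_lidar(detection_bearing_rad, lidar_points_xyz,
                           max_sign_range_m=40.0):
        """Keep a camera 'stop sign' detection only if lidar actually sees
        a return at roadside-sign range along roughly the same bearing."""
        bearings = np.arctan2(lidar_points_xyz[:, 1], lidar_points_xyz[:, 0])
        ranges = np.linalg.norm(lidar_points_xyz[:, :2], axis=1)
        # returns within ~2 degrees of the detection (ignoring wraparound)
        near = np.abs(bearings - detection_bearing_rad) < np.radians(2.0)
        if not near.any():
            return False  # nothing physical in that direction at all
        # a real sign is close; a picture on a distant billboard is not
        return bool(ranges[near].min() < max_sign_range_m)

Faking a closer range on lidar means physically putting an object there, which is exactly the property the parent comment is pointing at.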
[+] onion2k|5 years ago|reply
Maybe Lucid will run an advert that deliberately makes Teslas stop in front of it.
[+] Balgair|5 years ago|reply
Why would that be a crime? The driver is ultimately responsible, not the car.
[+] helsinkiandrew|5 years ago|reply
I'm not sure that counts as a phantom object. That's something that looks like a stop sign, positioned where a stop sign could be.

Admittedly it's only shown for half a second, but I imagine a distracted driver who looked up suddenly could be equally fooled.

Having video ads on screens that drivers can see and be distracted by is the real problem.

[+] ip26|5 years ago|reply
Ok, so can we agree to get rid of billboards?
[+] jvanderbot|5 years ago|reply
Cars should not detect "obstacles". They should detect roads. They should not seek _exceptions_ to driveable space through classification- or segmentation-first strategies. They should only detect driveable space. Full stop.

You either detect a full-stopping-distance's worth of clear road through reliable (non visual) ranging with dense enough samples to preclude wheel-catching voids ... or you will continue to run into "objects" which slipped through your classifier.

I can accept reduction in this margin (e.g. in dense traffic flow) when it is first shown to work with margin.
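
A minimal sketch of that "prove the road is clear" check, assuming dense forward ranging samples across the lane (function names and thresholds are invented):

    import numpy as np

    def stopping_distance_m(speed_mps, reaction_s=0.5, decel_mps2=6.0):
        # distance covered during the reaction time, plus braking distance
        return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

    def road_is_clear(clear_range_per_sample_m, speed_mps):
        """clear_range_per_sample_m: for each dense sample across the lane,
        how far ahead the (non-visual) ranging sensor confirms clear road."""
        needed = stopping_distance_m(speed_mps)
        # every sample must be clear out to a full stopping distance;
        # a short return (object) or missing ground (void) fails the check
        return bool(np.all(np.asarray(clear_range_per_sample_m) >= needed))

At 30 m/s, those (assumed) numbers demand roughly 90 m of confirmed clear road before trusting the lane at all.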

[+] krainboltgreene|5 years ago|reply
> They should only detect driveable space.

I'm interested to hear how you decide what's drivable without detecting the things that make roads undrivable.

[+] __m|5 years ago|reply
This wouldn’t work well with moving obstacles. The road could be clear until a person enters the road within your full-stopping distance.
[+] hashtagmarkup|5 years ago|reply
How did your country eliminate all the hoofed ruminants? Why did they do it? Do you have a death wish?
[+] liquidify|5 years ago|reply
Why don't we build some experimental infrastructure that is designed specifically for self driving vehicles?
[+] aeroplain|5 years ago|reply
Having to invest in specialized infrastructure for self-driving cars defeats the whole point of them.

Any money spent on infrastructure for self-driving cars would be better spent on public transportation.

[+] jcrawfordor|5 years ago|reply
A number of dedicated test facilities for autonomous vehicles exist, some operated by universities, some by government agencies, and some of the autonomous vehicle vendors own their own (more common for those aligned with major automakers which generally already have large proving grounds, but Uber apparently has a small one).

I think the issue is that a dedicated test area will just never reflect the full set of problems you encounter in the real world, and for things like LED billboards it's going to be a lot cheaper to test in the real world than to buy and install your own. On the flip side, there's next to no money in developing "vehicle-dedicated infrastructure," so no one's that interested in testing on it exclusively.

[+] tpmx|5 years ago|reply
Like a railroad?

(Only 50% joking.)

[+] solarkraft|5 years ago|reply
I'f you put a fake road sign somewhere people are probably going to obey it as well.
[+] nojster|5 years ago|reply
That is likely, if the fake street sign were to resemble a real one both in appearance and context.

If it were a stop sign in a billboard ad, humans would ignore it, though.

[+] techbio|5 years ago|reply
I spent a second trying to figure out that apostrophe, and found it humorous, about the same as i'f I saw a stop sign on the shoulder of a freeway.
[+] jtsiskin|5 years ago|reply
Would they, if you put one in the middle of the interstate?
[+] maxkwallace|5 years ago|reply
This is a special case of a general class of vulnerabilities in AI models, where an adversary can cause undesired output from the model by constructing input data not represented in the training set. However, it is legit much more concerning than, e.g. the issue of image classification models mis-identifying well-constructed noise as "panda".

This is currently a research frontier for AI so us non-experts likely won't be able to say a ton about it.

I thought this was a good talk on the issue: https://www.youtube.com/watch?v=SS9DMr4VkbY
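
For the curious, the canonical construction behind that "noise" example (the fast gradient sign method from Goodfellow et al.) is only a few lines. A hedged PyTorch-style sketch, assuming you already have a trained classifier `model` and a correctly labelled input:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.01):
        """Nudge every pixel by epsilon in the direction that most
        increases the classifier's loss on the true label."""
        image = image.clone().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # imperceptible perturbation, but it can flip the predicted class
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

The billboard attack is the physical-world analogue: instead of perturbing pixels in a file, you perturb what the camera sees.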

[+] mike_d|5 years ago|reply
States just need to start mandating LIDAR for any level of autonomous certification, in the same way we finally figured out seatbelts were the minimum level of safety for passenger retention.
[+] GhostVII|5 years ago|reply
I always find these things overhyped. Like yes, you can trick an autonomous car into stopping or crashing. But you can also trick a human into stopping or crashing by running into traffic with a construction hat and stop sign, or dropping rocks from an overpass. The reality is that road safety relies on the vast majority of people agreeing to not kill each other.
[+] klyrs|5 years ago|reply
Hacked billboards are hard to come by. We've already seen signs that autopilot can be tricked by malicious lines on a road.

Not that long 'til somebody points a projector at the ground in front of a Tesla and it crosses into oncoming traffic. You could even mount that projector on a panel van, and use image recognition to target Teslas and otherwise stay inactive.

[+] shultays|5 years ago|reply
I mean, if I saw a stop sign on a billboard I would stop as well; I wouldn't consider this Tesla's fault.
[+] cblconfederate|5 years ago|reply
Anything with software sadly becomes a weapon. Especially when it's heavy, bulky, and potentially deadly.

There will be a lot of money thrown at adversarial-attack mitigation. But how much is too much? At some point it might become so expensive that it's cheaper to switch to guiderail-based self-driving, which can be overseen and secured centrally. And self-driving planes are probably going to be a nonstarter.

[+] wazoox|5 years ago|reply
My wife nearly had an accident because of a huge ad billboard that suddenly displayed huge roaring flames, to advertise for some steakhouse.

My general feeling is that these sorts of intrusive, dangerous, anti-ecological (such a billboard consumes as much power as a family of four), anti-social (one fewer worker changing billboards) ads should be banned, period.

[+] zimpenfish|5 years ago|reply
I worked at a place trying to do roadside LED billboards (~2009, before they were common) and the rules about content (in the UK, at the time) were fairly strict - nothing animated, no cuts, only fades, even between distinct adverts, etc. From what I've seen of the fancy new ones, those rules seem to be mostly still around (in London, anyway).
[+] chungus_khan|5 years ago|reply
They also needlessly drive up land value around major roads, and I don't feel much sympathy for companies that subsist off of simply owning said eyesores.
[+] kalium-xyz|5 years ago|reply
What do you think about e-ink billboards?
[+] NiceWayToDoIT|5 years ago|reply
Couldn't this simply be prevented by sign edge detection? Are there standard traffic-sign dimensions? If the new Teslas can detect depth, it should be possible to tell what is a billboard and ignore it. I don't know the laws in America - is it even legal to display real traffic signs digitally on billboards?
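
One way to act on that idea: given a depth estimate and the camera's focal length, the pinhole model recovers a detected sign's physical size, which can be checked against standard dimensions (a common US stop sign is about 0.75 m across). A hedged sketch with an invented tolerance:

    STOP_SIGN_WIDTH_M = 0.75  # common US size; larger variants exist

    def physical_width_m(depth_m, bbox_width_px, focal_length_px):
        # pinhole model: real width = depth * pixel width / focal length
        return depth_m * bbox_width_px / focal_length_px

    def plausibly_real_stop_sign(depth_m, bbox_width_px, focal_length_px,
                                 tol=0.25):
        width = physical_width_m(depth_m, bbox_width_px, focal_length_px)
        # a stop sign rendered on a big billboard comes out metres wide
        return abs(width - STOP_SIGN_WIDTH_M) / STOP_SIGN_WIDTH_M < tol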