Sensationalism is definitely the right word for this. The article talked more about "white-noise attacks" on NNs than anything else, but I've yet to hear of a white-noise attack that did anything worse than make a NN misidentify an object. Sure, in the right system, that could wreak havoc, but right now it's not much more than a parlor trick. Maybe if an attacker knew enough about the targeted model, they could exert a little more control over the outcome, but that would require white-box insight into the model. The mere fact that you can feed corrupted pictures into a NN until it breaks isn't enough to call this an "emerging security crisis".
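For the curious, the white-box case is easy to sketch. Here's a minimal example of a gradient-based attack in the FGSM style, assuming a PyTorch image classifier; model, image, label, and the epsilon value are placeholders of mine, not anything from the article:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        # Nudge every pixel in the direction that increases the loss,
        # bounded by epsilon so the change stays nearly invisible.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

The point is that with gradient access the "noise" isn't random at all, it's aimed. Without that white-box access, an attacker is stuck with much slower query-based guessing.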
doublekill | 7 years ago
The parlor trick becomes dangerous to the powers that be when you start fooling surveillance systems, smart gun turrets, or drones. This is already happening in the background. That's where the funding comes from: not an SV company fearing that its face filter doesn't work, but governments afraid their deep-net border security will be rendered moot.
If anything, the article is countering hype by citing researchers who say we don't really know how deep learning learns and represents objects, and that deep nets are a very weak copy of the human brain.