The contrast between the responses on HN to this facial recognition technology and to China’s facial recognition technology is mind-boggling. Commenters saw the Chinese tech as dystopian (rightly so), yet see this technology as a way “to make sure we're getting the right people”, while allowing that we might still want to think about how its use could eventually go “too far”.
If China’s facial recognition system is already “too far”, how is this tech not also already too far? I guess if a technology is only used to recognize and assassinate foreign nationals, and not to surveil citizens (which it will eventually be used to do), most Americans are okay with it. Some commenters are critical of this research, but the level of concern in these comments is far lower than on posts about similar Chinese systems.
The point isn’t that this tech could improve accuracy and kill somewhat fewer civilians than U.S. drone and air strikes around the world currently do. The entire basis for this activity - shooting missiles into civilian areas thousands of miles from home in endless wars - is the issue. The fact that the military sees a use for this kind of technology is the core of the problem, and no matter how well it works, it will only make the assassinations performed by the U.S. military more efficient, not abolish them.
“Fusion of an established identity and information we know about allows us to decide and act with greater focus, and if needed, lethality,” the DFBA’s director wrote in presentation notes last year.
It also opens up the possibility of weaponry optimized for an individual target's physical and mental weaknesses, personalized propaganda, and attacks on people's social connections, including noncombatants.
Framing is key for how you think about these military technologies.
You could frame this as the government making it possible to kill people in the dark automatically, or as just one more input among a vast array of data sources used to make life-or-death decisions about high-value military targets.
The technology has the potential to make sure we're getting the right people, but its use will almost certainly be pushed too far. It has utility, and we should rightly be concerned with keeping it inside its limited intended application.
"We kill people based on metadata" - General Michael Hayden
Also this bit from https://en.wikipedia.org/wiki/Civilian_casualties_from_U.S._...: "Between 2009 and 2015, out of 473 strikes between 64–116 non-combatant deaths occurred. However during that period, the Obama Administration did count all military-age males in strike zones as combatants unless explicit intelligence exonerated them posthumously."
I don't think I trust the US military to determine what a "high-value target" is, based on their track record. I also don't think more data will help, since one of the basic steps of understanding data is understanding how limited and/or detailed the dataset you are working with actually is. If they haven't clearly understood the dataset they are working with now, there is basically no chance that more data will help. There needs to be a culture shift, not just a refinement.
Well, you mentioned framing, but aren't you even a little worried that the US military continues to kill thousands of people by drone strike, with total impunity, across several undeclared war zones? This isn't theoretical. It is going on now. This technology will be used to kill people with absolutely zero accountability. Maybe a planet where extra-judicial killings are hyper-efficient isn't where we should invest our best minds and our money. I reject the framing where we have to accept this.
Given the current state of best-of-breed, up-close facial recognition, using the term "works" in a life-and-death scenario is an irresponsible overreach.
"Fail early, kill innocents often" is a terrible paradigm.
Generalized systems search massive databases; these systems can work against much narrower data sets. "Who is X?" and "Is this X?" are very different problems computationally.
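To make that concrete, here is a minimal sketch of the two operations, assuming the usual modern setup of faces compared as embedding vectors via cosine similarity; the function names, threshold, and gallery are illustrative, not anything from the article:

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """'Is this X?' -- a single 1:1 comparison against one enrolled template."""
    return cosine_sim(probe, enrolled) >= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.6):
    """'Who is X?' -- a 1:N search of the whole gallery; the chance of a
    false match grows with the size of the gallery."""
    best_id, best_score = None, -1.0
    for person_id, template in gallery.items():
        score = cosine_sim(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```

A narrow watchlist keeps the gallery small, which is what makes the 1:N problem tractable compared with a nationwide surveillance database.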
Also; "fail early, kill innocents" is a great paradigm for staying in power.
It doesn't have to be perfect, just better than human performance. I heard a lot of stories out of Afghanistan and Iraq that boiled down to "They had a big thing on their shoulder and it was pointed at a tank, so we had to kill them" - never mind that one time in ten it was a TV camera.
I'm sorry, but I couldn't find a specific reference to where they say this tech would SOLELY be used in "life and death scenarios" or be linked to any sort of "kinetic action".
The only mention which comes close:
> “Fusion of an established identity and information we know about allows us to decide and act with greater focus, and if needed, lethality,”
"Fusion" is military parlance for "we would use a variety of sensor inputs and systems" to make inferences. So, this would likely be only one component of many others used to determine identity and/or hostile intent.
Tangential: that face is not the one I would have pictured when looking at the IR image. It looks like some weird "white-hot" version of IR plus ambient lighting, and once transformed, it lost the mustache entirely.
I'm sure there's a reluctance to put out the real capabilities directly, but I'm also sure there's a reluctance to put out the real weaknesses directly.
Curious if this will work.
About a year ago I was reading about US military research into a portable, personnel-carried device that combines night vision and thermal vision in a single unit - I don't think they managed a breakthrough with that. This face detection would work great if ported to such a device.
Does anyone know of ways to combat these systems - something like a special pattern that makes a face hard to read? China is leading the way in facial recognition, and I'd be surprised if there aren't any countermeasures available.
It depends on the scenario. If all you're trying to do is stop it from working, the article says it's IR-based, so jamming the sensor (the camera) with a lot of IR radiation can work quite nicely.
Individually you could do this with bright IR-emitting LEDs. At battlefield scale, a cool trick already being used today on the yachts of the rich and private is a laser that shines a lot of light directly onto the CMOS (the sensor element) of the capturing device when the shutter opens.
These methods work, but they don't hide what they're doing (jamming). It would be instantly obvious what you were up to, which is fine on the battlefield if you're not trying to hide your position - not so much in China.
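A minimal sketch of why flooding the sensor works, using a synthetic 8-bit image (purely illustrative numbers, not anything from the article): once the added IR pushes pixels against the clipping point, the contrast a recognizer needs is simply gone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 8-bit "face" region with modest thermal contrast.
scene = rng.normal(loc=120, scale=25, size=(64, 64))

def capture(scene: np.ndarray, flood: float) -> np.ndarray:
    """Simulate the sensor: add a uniform IR flood, then clip to the 0-255 range."""
    return np.clip(scene + flood, 0, 255)

for flood in (0, 80, 160):
    frame = capture(scene, flood)
    # Standard deviation as a crude stand-in for usable contrast.
    print(f"flood={flood:3d}  contrast={frame.std():5.1f}  "
          f"saturated={np.mean(frame == 255):.0%}")
```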
I seem to recall the Terminator UI did all this before deciding what to do. Is that our future - killer robots wandering around killing suspected "terrorists" and other undesirables? I suppose if you combine China's social-score data collection and this tech with Judge Dredd-like robots, our society will turn into a kind of Minority Report, where data leads to pre-crime termination.
The future is now. Drone assassination has become a common tactic for killing suspects. The tactic is pretty effective and it will not go away, with or without AI.
When killing the "bad guys" is more important than feeding your own people. I love how they use the term "target" instead of "person"; they're not even trying to hide it.
If you look at the US "leadership", surely anyone who is reading HN still has enough mental capacity to worry about that kind of judicial power.
The beatings will continue until morale improves.
Isn't this what you want? A system that could be fooled by a little facial hair seems like it would be pretty useless.
Reminds me of Captain America: The Winter Soldier https://www.youtube.com/watch?v=3ru5wM7fl7g
"Deploy the algorithm. Algorithm deployed"
Look No. 1 could fit quite well in a military context, I believe. Looks 3 and up, however, require at least some urban setting. I'm also not sure whether some of these measures would work against thermal imagery.
Not a future I want to be in.
https://www.vice.com/en_us/article/d738aq/us-drones-target-t...
Corollary: not until widespread use and abuse highlights the danger.