
The military is building long-range facial recognition that works in the dark

104 points | coryodaniel | 6 years ago | onezero.medium.com

75 comments

[+] csb6|6 years ago|reply
The responses on HN to this facial recognition technology vs. China's facial recognition technology are mind-boggling. Commenters saw the Chinese tech as dystopian (rightly so), yet see this technology as a way "to make sure we're getting the right people", with only the caveat that we might want to think about how its use could eventually go "too far".

If China’s facial recognition system is currently “too far”, how is this tech not also already too far? I guess if a technology is only used to recognize and assassinate foreign nationals, and not surveil citizens (which it will eventually be used to do), most Americans are okay with it. Some commenters are critical of this research, but the level of concern in these comments is way less than on posts about similar Chinese systems.

The point isn’t that this tech could increase accuracy and kill somewhat fewer civilians compared to the current amount of civilians killed regularly by U.S. drone and air attacks around the world. The entire basis for this activity - shooting missiles into civilian areas thousands of miles from home in endless wars - is the issue. The fact that the military sees a use for this kind of technology is the core of the problem, and no matter how well it works, it will only increase the efficiency of assassinations performed by the U.S. military, not abolish them.

[+] ilamont|6 years ago|reply
“Fusion of an established identity and information we know about allows us to decide and act with greater focus, and if needed, lethality,” the DFBA’s director wrote in presentation notes last year.

It also opens up the possibility of weaponry optimized for an individual target's physical and mental weaknesses, personalized propaganda, and attacks on people's social connections, including noncombatants.

[+] dumbfoundded|6 years ago|reply
Framing is key for how you think about these military technologies.

You could frame this as the government making it possible to kill people in the dark automatically or as another data input to a vast array of data sources used to make life/death decisions for high-value military targets.

The technology has the potential to make sure we're getting the right people, but almost certainly its use will be pushed too far. It has utility, and we should rightly be concerned that it doesn't get used outside of its limited intended application.

[+] SahAssar|6 years ago|reply
"We kill people based on metadata" - General Michael Hayden

Also this bit from https://en.wikipedia.org/wiki/Civilian_casualties_from_U.S._...: "Between 2009 and 2015, out of 473 strikes between 64–116 non-combatant deaths occurred. However during that period, the Obama Administration did count all military-age males in strike zones as combatants unless explicit intelligence exonerated them posthumously."

I don't think I trust the US military to determine what a "high-value target" is, based on their track record. I also don't think more data will help, since one of the basic steps for understanding data is to understand how limited and/or detailed the dataset you are working with actually is. If they haven't clearly understood the dataset they are working with now, there is basically no chance that more data will help. There needs to be a culture shift, not just a refinement.

[+] titzer|6 years ago|reply
Well, you mentioned framing, but aren't you even a little worried that the US military continues to kill thousands of people by drone strike with total impunity across several undeclared war zones? This isn't theoretical. This is going on now. This technology will be used to kill people with absolutely zero accountability. Maybe a planet where extra-judicial killings are hyper-efficient isn't where we should invest our best minds and our money. I reject the framing where we have to accept this.
[+] blunte|6 years ago|reply
Who defines a "high-value military target" though?

If you look at the US "leadership", surely anyone who is reading HN still has enough mental capacity to worry about that kind of judicial power.

[+] justforyou|6 years ago|reply
Given the current state of best-of-breed, up-close facial recognition, using the term "works" in a life-and-death scenario is an irresponsible overreach.

"Fail early, kill innocents often" is a terrible paradigm.

[+] JackRabbitSlim|6 years ago|reply
Generalized systems search massive databases, these systems can have much narrower data sets. "Who is X?" and "Is this X?" are very different computationally.
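The 1:N vs. 1:1 distinction drawn here can be sketched in a few lines. This is a toy illustration, not anything from the article: the embeddings, names, and threshold are all made up, and real systems use learned face embeddings rather than hand-written vectors.

```python
# Toy sketch of the difference between 1:1 verification ("Is this X?")
# and 1:N identification ("Who is X?"). All values are illustrative.
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(probe, enrolled, threshold=0.8):
    """1:1 -- a single comparison against one enrolled template."""
    return cosine(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """1:N -- compare against every template in the gallery.
    Each extra template is another chance for a false match."""
    best_id, best_score = None, threshold
    for person_id, template in gallery.items():
        score = cosine(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

gallery = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.95, 0.3]}
probe = [0.88, 0.15, 0.22]
print(verify(probe, gallery["alice"]))  # one comparison
print(identify(probe, gallery))         # N comparisons
```

A narrow watchlist keeps N small, which is why "Is this X?" can tolerate a far looser sensor than an open-ended "Who is X?" search over millions of faces.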

Also; "fail early, kill innocents" is a great paradigm for staying in power.

The beatings will continue until morale improves.

[+] saulrh|6 years ago|reply
Doesn't have to be perfect, just has to be better than human performance. I heard a lot of stories out of Afghanistan and Iraq that ended up boiling down to "They had a big thing on their shoulder and it was pointed at a tank so we had to kill them", never mind that one time in ten it was a TV camera.
[+] Ididntdothis|6 years ago|reply
"Fail early, kill innocents often" is perfectly acceptable in foreign countries especially if the population there is viewed as backwards in some way.
[+] dfsegoat|6 years ago|reply
I'm sorry, but I couldn't find a specific reference to where they were saying this tech would SOLELY be used in "life and death scenarios" or be linked to any sort of "kinetic action".

The only mention which comes close:

> “Fusion of an established identity and information we know about allows us to decide and act with greater focus, and if needed, lethality,”

"Fusion" is military parlance for "we would use a variety of sensor inputs and systems" to make inferences. So this would likely be only one component among many used to determine identity and/or hostile intent.

[+] sudoaza|6 years ago|reply
Soon you'll be droned by face recognition, oh the future!
[+] jvanderbot|6 years ago|reply
Tangential: That face is not the one I would have pictured when looking at the IR image. It looks like some weird "white-hot" version of IR plus ambient lighting, and once transformed, lost the mustache entirely.

I'm sure there's a reluctance to put out the real capabilities directly, but I'm also sure there's a reluctance to put out the real weaknesses directly.

[+] mason55|6 years ago|reply
> and once transformed, lost the mustache entirely.

Isn't this what you want? A system that could be fooled by a little facial hair seems like it would be pretty useless.

[+] superbrane|6 years ago|reply
Curious if this will work. About a year ago I was reading about US military research into a portable, personnel-use device capable of combining night vision and thermal vision in one vision set; I don't think they managed a breakthrough with that. This face detection would work great if ported to such a device.
[+] Aaronstotle|6 years ago|reply
Does anyone know of ways to combat these systems, something like a special pattern that makes a face hard to read? China is leading the way in facial recognition, and I'd be surprised if there aren't any countermeasures available.
[+] xnyan|6 years ago|reply
It depends on the scenario. If all you're trying to do is stop it from working, the article says it's IR-based, so jamming the sensor (the camera) with a lot of IR radiation can work quite nicely.

Individually you could do this with bright IR emitting LEDs. On the scale of the battlefield, a cool trick that is already being done today on yachts of the rich and private is using a laser to shine a lot of light directly on the CMOS (sensor element) of the capturing device when the shutter opens.

These methods work but they don’t hide what they’re doing (jamming). It would be instantly obvious as to what you were doing which would be OK on the battlefield if you’re not trying to hide your position, not so much in China.

[+] polycaster|6 years ago|reply
Well, there is this unpretentious approach: https://cvdazzle.com/

Look Nº 1 could fit very well in a military context, I believe. Look Nº 3, however, requires at least some urban setting.

Also, I'm not sure whether some of these measures would work against thermal imagery.

[+] akeck|6 years ago|reply
A possible result is that adversaries will deploy robots that all emit the same radiation patterns in lieu of identifiable persons.
[+] zimmertr|6 years ago|reply
I'm not sure I'd consider this a possibility as much as a guaranteed evolution of defense.
[+] agumonkey|6 years ago|reply
Evolutionary pressure will now make chameleon genes reappear with morphogenesis located on our faces
[+] coldcode|6 years ago|reply
I seem to recall the Terminator UI did all this before deciding what to do. Is that our future, killer robots wandering around killing suspected "terrorists" or other undesirables? I suppose if you combine China's social score data collection and this tech with Judge Dredd-like robots, our society will turn out like a kind of Minority Report where data leads to pre-crime termination.

Not a future I want to be in.

[+] 188201|6 years ago|reply
The future is now. Drone assassination has become a common tactic for killing suspects. The tactic is pretty effective and it will not go away, with or without AI.
[+] lm28469|6 years ago|reply
When killing the "bad guys" is more important than feeding your own people. I love how they use the term "target" instead of "person", they're not even trying to hide it.
[+] colechristensen|6 years ago|reply
Prediction: the 21st century will see computer aided technology for identification and tracking banned like chemical weapons in the 20th.

Corollary: not until widespread use and abuse highlights the danger.

[+] UnFleshedOne|6 years ago|reply
Likely won't be banned, but limited in scope (so no autonomous firing, but yes robocop style visual field highlights, etc)
[+] jellicle|6 years ago|reply
The only real application here is killing people with drone-launched missiles.