[+] [-] DubiousPusher|5 years ago|reply
It seems to me this is not a facial recognition story but a story about how the state, with very little evidence, can ruin your life and leave you with very little recourse.
As concerned as I am about state use of facial recognition, these kinds of things have been happening to people for decades. The solution to this problem is to better secure the rights of people accused of crimes.
We need to ensure that the accused do not lose their jobs and homes, and that they have a fair opportunity to communicate with counsel and family (both are often needlessly limited by the DoC). And the state must be responsible for mitigating its mistakes.
Bail requirements are too often imposed as a matter of course and need a "burden of proof" style rethinking. Holding people at home should be required before incarceration. Lastly, we need to remove the power of legislatures to fund prosecutors and public defenders unequally.
[+] [-] Blikkentrekker|5 years ago|reply
> It seems to me this is not a facial recognition story but a story about how the state, with very little evidence, can ruin your life and leave you with very little recourse.
The facial recognition is relevant: they relied on facial recognition software, even though its use is apparently illegal in New Jersey.
From the article as well:
> He asked for a lawyer, then was taken to a hallway where he was handcuffed to a bench. About an hour later, the officers – he counted seven – told him they were going to take him to a different room for more questions.
As far as I understand U.S. law, when a suspect asks for a lawyer, the interrogation must stop immediately; a lawyer must be provided, or the suspect must be allowed to call one, and the lawyer must be present before the interrogation continues.
The real takeaway seems to be that the police did something illegal.
The rest of your post proposes many more rules, but the issue is that the existing rules were broken here.
[+] [-] simion314|5 years ago|reply
> It seems to me this is not a facial recognition story
Maybe there should be consequences for selling bad software that has bad consequences? There are a lot of greedy bastards who will knowingly sell bad products while minimizing the issues. Why not have proper standards like real professionals do, say scientists, who need to prove that what they discovered is correct at a high confidence level? With AI there is no proof that it works correctly, or the sales department is bullshitting a ton of claims.
[+] [-] MeinBlutIstBlau|5 years ago|reply
Or just take the German approach and reimburse people who aren't convicted, and guarantee everything else you would lose in the US in that event as well. Which is exactly why their court system is far choosier about whom it takes to court. Unlike the US, which is basically a lottery where the odds are much, much higher than any Powerball, with terrible consequences if you win.
[+] [-] js2|5 years ago|reply
Buttle? Tuttle?
https://vimeo.com/224785800
[+] [-] flatus|5 years ago|reply
That's nothing: Steven Talley was identified by the FBI as the primary suspect in two bank robberies using a facial recognition algorithm. He had an iron-clad alibi, but the police and FBI weren't convinced. In court, one of the bank tellers said Talley definitely wasn't the robber. Nonetheless, Talley lost his job, his wife, and his family, and was held in prison for months:
"LOSING FACE: How a Facial Recognition Mismatch Can Ruin Your Life"
https://theintercept.com/2016/10/13/how-a-facial-recognition...
FTFA:
"Talley said he was held for nearly two months in a maximum security pod and was released only after his public defender obtained his employer’s surveillance records. In a time-stamped audio recording from 11:12 a.m. on the day of the May robbery, Talley could be heard at his desk trying to sell mutual funds to a potential client."
Today Talley is still trying to claw his way back to normalcy.
In the outstanding book "Hello World" Hannah Fry examines Talley's story as part of a chapter on crime, AI and facial recognition. Reading Fry's book convinced me that facial recognition software simply does not work well enough to use in police work. As Fry says:
"If you're searching for a particular criminal in digital line-up of millions...the best-case scenario is that you won't find the right person one in six times...". That is not nearly good enough for law enforcement and the courts.
"Hello World" by Hannah Fry
https://www.amazon.com/Hello-World-Hannah-Fry/dp/0857525255
[+] [-] vrperson|5 years ago|reply
At some point they even have a human compare the likeness, and the human also concludes it is the same person.
The article even features the sentence "Steve Talley is hardly the first person to be arrested for the errors of a forensic evaluation."
And yet people seem hell-bent on making it about AI.
[+] [-] fencepost|5 years ago|reply
Legal back-and-forth continues as of 12/14/2020
[+] [-] renewiltord|5 years ago|reply
The problem appears to be that police and prosecutors are either morons or so strongly incentivized to arrest and convict that they ignore obvious nonsense.
Given that this is the case, the use of AI by any member of the US justice system cannot be accepted. If I had access to a facial recognition system, I would use it as a whittling down tool, to reduce the amount of hay to find the needle in. But they clearly see a straw and conclude that it is a needle.
It's like giving a four-year-old a handgun. It's just not responsible. They're just not as good at this as I am. So no handgun for you.
It's a pity. Ideally you should be able to discriminate, giving the tool to intelligent police and withholding it from stupid police. However, because stupid police and smart police exist in an emulsion, it is better to waste the time of the smart police than to give the stupid police too powerful a tool.
[+] [-] MeinBlutIstBlau|5 years ago|reply
When the state doesn't have to foot the bill for taking you to court, of course they don't care. If the prosecutor had to pay all your legal fees when they lost, we'd either get some pretty amazing state prosecutors or ones who only accept evidence from cops that can convince a judge.
It's almost as if stopping cops from just locking people up would create an immense snowball effect on the whole system...
[+] [-] tolbish|5 years ago|reply
> If I had access to a facial recognition system, I would use it as a whittling down tool, to reduce the amount of hay to find the needle in. But they clearly see a straw and conclude that it is a needle.
> It's like giving a four-year-old a handgun. It's just not responsible. They're just not as good at this as I am. So no handgun for you.
Just wanted to say I've never heard such an argument worded so well. What is the course of action for the common man other than verbalizing such thoughts to local city councils and law enforcement?
[+] [-] eecc|5 years ago|reply
Perhaps USians are desensitized to guns, but I paused when reading that the police officers drew their guns while chasing the suspect in the original incident, which was about shoplifting candy from a hotel lounge? (And I wonder whether the concierge who called the cops on this guy hadn't provoked the taunt with some "kind words".)
[+] [-] renewiltord|5 years ago|reply
Americans are, as a rule, highly authoritarian. You'll see it online as people use pseudo-bureaucratic corpo-speak to justify state violence: "refusal to obey a lawful order", "did not comply with a lawful request", "in violation of state regulation", etc.
Having lived elsewhere and in the US (a country I practically adore), I suspect it is because Americans have never really experienced true authoritarianism at any point in their history. Every time they stray close to the darkness they walk away unscathed, and it convinces them that they can never really be consumed by it, so they're willing to walk its very edge. Perhaps the UK having a monarch and a state religion is a visible reminder of what a monstrosity the state can be.
It's funny that both nations have populations that support the massive expansion of the state into human affairs: it's just that the Americans want it to enforce behaviour and the British want it to redistribute wealth.
The most amusing manifestation of the American comfort with violence is the difference in how streakers are handled in British football games and American football games. The former have overweight stewards lumber after naked people, failing repeatedly to apprehend, eventually accompanying them off the field. The latter have large, muscular guards catch up to and violently tackle the streaker to the ground.
[+] [-] 11thEarlOfMar|5 years ago|reply
He was taken into custody about 10 days after the incident. What did the three eyewitnesses say? Did they make positive identifications? Did the officers involved in the initial incident claim he was the same person? Were any of the seven officers the ones who had encountered him at the hotel? How about the hotel staffer: did they make a positive ID? Was "Jamal" a hotel guest?
Seems like there should be a discussion about all of this in the article.
Facial recognition as the sole evidence must not be used to make arrests.
[+] [-] wvenable|5 years ago|reply
That part boggled my mind. He had someone who could prove he was not who they were looking for. How the hell can the system just go ahead and charge you even though there is clear evidence that you are not the person they are looking for!?
[+] [-] 99_00|5 years ago|reply
Blaming the technology excuses these awful, incompetent police, who will just screw up in a low-tech way.
Once the computer flagged a suspect, a basic investigation should have ruled him out or confirmed him.
[+] [-] grumio|5 years ago|reply
That can't be solved with a software update.
[+] [-] MeinBlutIstBlau|5 years ago|reply
[+] [-] ogre_codes|5 years ago|reply
It's worth pointing out that the technology isn't capable of what people think it is capable of.
[+] [-] gengelbro|5 years ago|reply
Seems like we'd need to know the background rate of such wrongful arrests to tell if we're really regressing.
[+] [-] asymptotically4|5 years ago|reply
[+] [-] nelsonenzo|5 years ago|reply
[+] [-] _underfl0w_|5 years ago|reply
Does Clearview AI train on mugshot datasets specifically?
I'm absolutely NOT advocating for its use at all, but it seems like that might be a good source of well-tagged, organized data for what is turning out to be a very problematic (though apparently permitted) use case.
From a photography standpoint, it seems to follow that identical lighting in the same room where everyone's mugshots are taken could produce different levels of contrast for facial features across different skin tones, simply as an artifact of cheap lenses and a lack of white-balancing or overall care. If persons A and B have different skin tones, then the routine, careless, one-size-fits-all black-and-white photo taken by an underpaid government office worker may not capture the contrast in the shadows of their facial features to the same degree, no fallible human bias required.
This lack of definition may then further reinforce any existing biases in ML training, enforcement, etc. against people whose features aren't as well contrasted in the resulting, awful photos. Perhaps training on such a grainy, washed-out dataset would at least help the ML distinguish smaller variances in contrast to a finer degree, if nothing else. We humans can do it, after all.
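[Editor's note: a toy sketch of the optics point above, using synthetic numbers rather than real mugshot data. Under one fixed exposure for everyone, the same relative shadow detail survives as fewer distinct 8-bit levels on a darker base tone.]

    # Toy illustration: identical lighting, identical relative shadow detail,
    # one fixed exposure; count the distinct 8-bit levels that survive capture.
    import numpy as np

    illum_shadow = 0.5                     # same shadow illumination for both subjects
    detail = np.linspace(0.95, 1.05, 100)  # +/-5% micro-variation within the shadow

    for name, reflectance in [("lighter skin", 0.60), ("darker skin", 0.20)]:
        signal = reflectance * illum_shadow * detail
        captured = np.round(signal * 255)  # naive 8-bit capture, no per-subject exposure
        print(f"{name}: {len(set(captured))} distinct pixel levels in the shadow region")

Fewer surviving levels means less recoverable facial structure in the shadows, before any human or model bias enters.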
[+] [-] Cactus2018|5 years ago|reply
https://www.theverge.com/2020/2/6/21126063/facebook-clearvie...
* "Facebook and LinkedIn are latest to demand Clearview stop scraping images for facial recognition tech. Twitter and YouTube have also objected."
[+] [-] irateswami|5 years ago|reply
Perhaps it's just my schadenfreude acting up, but I pray to sweet baby Jesus that they try to bust somebody with this. I'd love to see a whole bunch of engineers from Clearview AI get a proper shellacking on the stand from a criminal defense attorney. I don't see how anything from this product could be used as evidence without the algorithms being questioned or exposed in court.
[+] [-] lb1lf|5 years ago|reply
Just out of curiosity: if the AI software relies on images scraped from the web, could one conceivably hit the developer with copyright infringement charges?
Or would this be fair use?
[+] [-] mikestew|5 years ago|reply
I wish I could say that I was shocked that the victim of bad facial recognition is black. I wish I could say that I'm shocked that authorities take the word of software over looking at the evidence and thinking, "umm, doesn't look like him..." Or perhaps someone did look and said, "meh, they all look the same to me."
All I know is that I'm highly skeptical that this white guy would suffer the same fate, even if I had a similar criminal record. Facial recognition seems to be the polygraph of a new century. But unlike a polygraph, it produces false positives mostly for dark-skinned people. While I'm not quite ready to allow my cynicism to let me think it is an intentional feature, I question how many are demanding a fix.
[+] [-] AmVess|5 years ago|reply
The last thing police need is more tools that don't work and that they don't understand how to use. That they still use polygraphs should make everyone's hair stand on end.
[+] [-] cgriswald|5 years ago|reply
> All I know is that I'm highly skeptical that this white guy would suffer the same fate, even if I had a similar criminal record.
It is not clear to me that a prosecutor's office which has demonstrated it is not interested in justice would have cared about skin color in the beginning. They're after low-hanging fruit, and maybe after establishing facial recognition as 'proof enough'. A poor white ex-con who can't afford a lawyer might just take a plea deal. I think the point of possible divergence is when the guy lawyered up.
[+] [-] throwaway2245|5 years ago|reply
> I'm not quite ready to allow my cynicism to let me think it is an intentional feature
Hanlon's razor is hinted at in the later discussion, i.e.
"never attribute to malice that which is adequately explained by incompetence" (at least, both the words "incompetence" and "malice" appear in that discussion)
But I prefer to apply a different rule of thumb:
"incompetence is strongly equivalent to malice"
[+] [-] treeman79|5 years ago|reply
https://nypost.com/2017/12/21/chinese-users-claim-iphone-x-f...
[+] [-] Scoundreller|5 years ago|reply
Dunno what the technical reason is, but if there was one, I bet it went something like this:
v1: sales 'engineer' says: "Our FR product works great on light skin but terribly on darker skin so we've largely disabled that" and made zero sales.
On v2 (version2 or vendor2), they just dialed up the acceptable false positive rate, said "ya, it works on everyone, no problem!" and sold sold sold!
[+] [-] gwern|5 years ago|reply
Are we looking at the same photos? They look uncannily alike to me, to the point where I'm left wondering if the forged license actually reuses a photo of Parks. (Since he is a double convict in NJ, I suspect his earlier mugshots would not be hard for the forger to get. Certainly no harder than forging a license.)
[+] [-] truthwhisperer|5 years ago|reply
[deleted]
[+] [-] StreamBright|5 years ago|reply
When will everybody understand that these algorithms are statistical and quite often wrong? It always surprises me when I talk to somebody who does not understand neural nets or the current state of ML algorithms and I have to explain that it is just statistics, and that you can never expect these systems to be 100% correct.
[+] [-] _underfl0w_|5 years ago|reply
You might be surprised if you ever worked in B2B or B2C sales. It's not uncommon to apply a "spin" that effectively sets unreasonable expectations for your client, whether they're an average joe or a law enforcement agency.
Not ethical, but also not exactly uncommon.
[+] [-] fmajid|5 years ago|reply
The problem comes from the expectations set by other industries, where correctness actually matters, both ethically and in terms of legal liability.
[+] [-] crooked-v|5 years ago|reply
The corresponding XKCD comic: https://xkcd.com/2030/
[+] [-] drak0n1c|5 years ago|reply
In Shenzhen, starting in the last couple of years, if you jaywalk your face is automatically recognized and the fine is automatically deducted from your bank account. It's extremely accurate due to full access to corroborating location, transit, and payment data. This happens within 20 seconds, and it even includes expats: your face was permanently indexed during the passport check.
Such an automated system at first glance may seem like it reduces on-the-ground human flaws and bias, but it actually enables far deeper corruption. Those with administrative control would be able to selectively highlight or concoct any sort of offense to undermine their personal or political opponents.
"Show me the man and I'll show you the crime" - Lavrentiy Beria, Stalin's Chief of Secret Police
[+] [-] contingencies|5 years ago|reply
FWIW, I live close to Shenzhen and have a company there, and I have never heard of this system actually being applied outside of media announcements. I am skeptical that it is widely deployed, as jaywalking is rife in nearby cities; I believe it is probably a proactive media announcement from a solutions provider to local police, claiming better capabilities than it really has. Of course it will happen eventually, but in urban areas globally we already carry cellphones and drive cars with number plates, so privacy left the arena some time ago...
Maybe in the short term we will see a push-back of all-weather snow-busting bicycle-riding mesh networkers in Berlin or something, but I think the global trajectory is clear. What we really need is an alternative to commercial cellular.
[+] [-] agnosticmantis|5 years ago|reply
Are the creators of these algorithms penalized for their false positives?
I think the confidence levels returned by these algorithms would be quickly calibrated if there were a strict penalty for high-confidence false positives.
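[Editor's note: a minimal sketch of the calibration audit this implies, with made-up numbers; the claimed-versus-actual gap is hypothetical, not any real vendor's figures. Bucket matches by reported confidence and check how often each bucket is actually right.]

    # Calibration check with synthetic data: does a match reported at ~99%
    # confidence turn out to be correct ~99% of the time?
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    claimed = rng.uniform(0.90, 1.00, size=n)   # vendor-reported confidence
    correct = rng.random(n) < (claimed - 0.15)  # assume true accuracy runs 15 points lower

    for lo, hi in [(0.90, 0.95), (0.95, 0.99), (0.99, 1.00)]:
        in_bin = (claimed >= lo) & (claimed < hi)
        print(f"claimed {lo:.2f}-{hi:.2f}: actually correct "
              f"{correct[in_bin].mean():.1%} of {in_bin.sum()} matches")

A penalty keyed to high-confidence misses would push the reported numbers toward the measured column.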
[+] [-] KittenInABox|5 years ago|reply
I would prefer a strict penalty for people who rely so much on their tool that they can't even do their jobs correctly. I'd fire a carpenter who fucked up my cabinet because an AI told them to, because I expect my carpenter to double-check. Similarly, if the carpenter finds the AI consistently useless and an avenue to get their ass sued, they'll decline to purchase shit AI.
[+] [-] LanceH|5 years ago|reply
This system is saying, "of the 10 million people we have in our database, this one looks like the photo you gave us."
It is the police and prosecutor who are asserting, "this person did it and should be held in jail."
This tool is the visual equivalent of looking up someone's name: think of the no-fly list being used as though no two people ever share the same name.
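[Editor's note: a minimal sketch of that point, with random vectors standing in for face embeddings; nothing here reflects Clearview's actual pipeline. A nearest-neighbor search always returns its best match, even when the person in the probe photo isn't in the database at all.]

    # Nearest-neighbor "matching" over synthetic face embeddings: the search
    # reports a best match whether or not the true person is in the gallery.
    import numpy as np

    rng = np.random.default_rng(0)

    db = rng.normal(size=(100_000, 128)).astype(np.float32)  # stand-in gallery
    db /= np.linalg.norm(db, axis=1, keepdims=True)

    probe = rng.normal(size=128).astype(np.float32)  # someone NOT in the gallery
    probe /= np.linalg.norm(probe)

    scores = db @ probe            # cosine similarity against every record
    best = int(np.argmax(scores))  # a "match" exists by construction
    print(f"best match: record {best}, similarity {scores[best]:.3f}")

Turning that top-ranked record into "this person did it" is a human decision layered on top of a lookup that was always going to return somebody.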
[+] [-] unknown|5 years ago|reply
[deleted]
[+] [-] duxup|5 years ago|reply
How do you question your accuser?
Do you have access to the software? How much access?
Do many people even have the resources to examine the software?
[+] [-] banjomet|5 years ago|reply
"The software, which was created by Clearview AI, was criticized for its heavy reliance on billions of social media photos to identify criminal suspects."
So how exactly did they get access to all these photos? It must have been trained on public networks like Instagram, not Facebook, right? If it was FB, that would be interesting.
[+] [-] Cactus2018|5 years ago|reply
> "Facebook and LinkedIn are latest to demand Clearview stop scraping images for facial recognition tech. Twitter and YouTube have also objected."
https://slate.com/technology/2020/02/youtube-linkedin-and-ot...
> "Facebook notably has not sent a formal cease-and-desist letter but claims to have sent other letters to Clearview to request more detail on its practices and then eventually “demanded” that it stop scraping user data. Peter Thiel, a venture capitalist and notable surveillance enthusiast who sits on Facebook’s board of directors, invested $200,000 in Clearview’s first round of funding."
[+] [-] unknown|5 years ago|reply
[deleted]
[+] [-] upofadown|5 years ago|reply
> ...notified Woodbridge police they had a “high profile” match to the photo,...
That's weird. You would think a facial recognition system would return hundreds of matches when searching through that many images. What does "high profile" mean in this case?
[+] [-] option|5 years ago|reply
Of course, no AI system (facial recognition or not) should be the sole decision-maker in even detaining someone.