Facial Recognition Leads To False Arrest Of Black Man In Detroit

661 points|vermontdevil|5 years ago|npr.org

279 comments

[+] ibudiallo|5 years ago|reply
Here is a part that I personally have to wrestle with:

> "They never even asked him any questions before arresting him. They never asked him if he had an alibi. They never asked if he had a red Cardinals hat. They never asked him where he was that day," said lawyer Phil Mayor with the ACLU of Michigan.

When I was fired by an automated system, no one asked if I had done something wrong; they just asked me to leave. It's the same here: if they had just checked his alibi, he would have been cleared. But the machine said it was him, so case closed.

Not too long ago, I wrote a comment here about this [1]:

> The trouble is not that the AI can be wrong, it's that we will rely on its answers to make decisions.

> When the facial recognition software combines your facial expression and your name, while you are walking under the bridge late at night, in an unfamiliar neighborhood, and you are black; your terrorist score is at 52%. A police car is dispatched.

Most of us here can be excited about facial recognition technology and still know that it's not something to deploy in the field. It's by no means ready. We might even weigh the ethics before building it as a toy.

But that's not how it is being sold to law enforcement or other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow up anything._ It is sold as an ultimate decision maker.

[1]: https://news.ycombinator.com/item?id=21339530

[+] zamalek|5 years ago|reply
52% is little better than a coin flip. If you have a million individuals in your city, your confidence should be in the ballpark of 99.9999% (1 individual in 1 million). That has always been my concern with this: the software will report any facial match above 75% confidence. Apart from that being appalling confidence, no cop will pay attention to the percentage before arresting, or even killing, the individual.
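
To put rough numbers on that, here is a minimal sketch, treating "confidence" loosely as 1 minus the per-person false-match rate (all figures invented for illustration, not from any real product):

    population = 1_000_000

    # Back-of-envelope: expected innocent matches per citywide search
    # at a given match confidence. All numbers are illustrative.
    for confidence in (0.75, 0.999, 0.999999):
        false_match_rate = 1 - confidence
        expected_false_hits = population * false_match_rate
        print(f"{confidence:.6%} -> ~{expected_false_hits:,.0f} innocent matches")

At 75% you get ~250,000 innocent matches per search; even at 99.9% you still get ~1,000. Only around 99.9999% does a hit become anything like an investigative lead.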

Software can kill. This software can kill 50% of black people.

[+] nimbius|5 years ago|reply
This shit right here. This is why I don't stop for the inventory control alarms at department store doorways if they go off. I know I've paid, and little sirens are just a nuisance at this point.

This is why I've never stopped for receipt checks, because it's my receipt, and I've paid. The security theatre is just bad comedy.

Just because the machine says I've done a no-no doesn't mean I can't come back and win a lawsuit later. It doesn't absolve cops of doing their jobs. I have a winning complexion, so I'll probably never suffer a false positive, but if I do, I'll make sure it bankrupts whatever startup-incubator garbage decided to shill a replacement for real law enforcement.

[+] sgt101|5 years ago|reply
Worse - an AI decision puts an obligation on the user to follow it. What do I mean? Well - imagine you are a cop: you get an auto-flag to arrest someone and use your discretion to override it. The person then goes on to do something completely different; say they are flagged as a murderer but then kill someone driving drunk. You will be flayed, pilloried. So basically, safety first: just make the arrest. The secret is that these systems should not be making calls in this kind of context, because they just aren't going to be good enough. It's like cancer diagnosis - the oncologist should have the first say, and the machine should be a safety net.
[+] yread|5 years ago|reply
You see it everywhere with AI and other tools: we over-trust them. Even when doctors have high confidence in their diagnosis, they will accept a wrong AI-recommended conclusion that contradicts it.

https://www.nature.com/articles/s41591-020-0942-0

A bit like self-driving cars: if it's not perfect, we don't know how to integrate it with people.

[+] m0zg|5 years ago|reply
> The trouble is not that the AI can be wrong

Exactly what I thought when I read about this. It's not like humans are great at matching faces either; in fact, machines have been better than humans at facial recognition for over a decade now. I bet there are hundreds of people (of all races) in prison right now simply because they were misidentified by a human. Human memory, even in the absence of bias and prejudice, is pretty fallible.

There is a notion of a "mixture of experts" in machine learning: you have two or more models that are not, by themselves, good enough to make a robust prediction, but that make different kinds of mistakes, and you use the consensus estimate. The resulting estimate will be better than any model in isolation. The same should be done here. AI should be merely one signal; it is not a replacement for detective work, and what's described in the article is just bad policing. AI has very little to do with that.
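
A toy sketch of why consensus helps, under the assumption that the experts' errors really are independent (all accuracy numbers invented):

    import random

    # Toy "mixture of experts": three mediocre classifiers whose errors
    # are independent (they make *different* kinds of mistakes), so a
    # majority vote beats any single expert.
    random.seed(0)

    def expert(truth, accuracy):
        """Return the true label with probability `accuracy`, else the wrong one."""
        return truth if random.random() < accuracy else 1 - truth

    trials = 100_000
    single_ok = vote_ok = 0
    for _ in range(trials):
        truth = random.randint(0, 1)
        votes = [expert(truth, 0.70) for _ in range(3)]
        single_ok += votes[0] == truth
        vote_ok += (sum(votes) >= 2) == truth

    print(f"single expert: {single_ok / trials:.3f}")   # ~0.700
    print(f"majority vote: {vote_ok / trials:.3f}")     # ~0.784

The catch, of course, is independence: if all the models (or the cop and the witness) are looking at the same grainy frame, their mistakes are correlated and the consensus buys you nothing.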

[+] ineedasername|5 years ago|reply
Can you add any details about how an automated system was used to fire you? I'm not familiar with systems like that.
[+] air7|5 years ago|reply
The problem is not with the technology but with how it's used. A medical test is also not 100% error-proof, which is why a professional needs to interpret the results, sometimes ordering other tests or disregarding the result completely.

A cop stopping someone who resembles a suspect for questioning seems like a good thing to me, as long as the cop knows there's a reasonable chance it's the wrong guy.

[+] 99_00|5 years ago|reply
>But that's not how it is being sold to law enforcement or other entities. It's _Reduce crime in your cities. Catch criminals in ways never thought possible. Catch terrorists before they blow up anything._ It is sold as an ultimate decision maker.

None of those selling points logically leads to the conclusion that it is the ultimate decision maker.

[+] danso|5 years ago|reply
This story is really alarming because, as described, the police ran a face recognition tool based on a frame of grainy security footage and got a positive hit. Does this tool give any indication of a confidence value? Does it return a list (sorted by confidence) of possible suspects, or any other kind of feedback that would indicate even to a layperson how much uncertainty there is?

The issue of face recognition algorithms performing worse on dark faces is a major problem. But the other side of it is: would police be more hesitant to act on such fuzzy evidence if the top match appeared to be a middle-class Caucasian (i.e. someone who is more likely to take legal recourse)?

[+] strgcmc|5 years ago|reply
I think the NYT article has a little more detail: https://www.nytimes.com/2020/06/24/technology/facial-recogni...

Essentially, an employee of the facial recognition provider forwarded an "investigative lead" for the match they generated (the match does have a score associated with it on the provider's side, though it's not clear whether that score is clearly communicated to detectives), and the detectives then put the photo of this man into a "6-pack" photo line-up, from which a store employee identified him as the suspect.

Everyone involved will probably point fingers at each other. The provider, for example, put a large heading on its communication saying "this is not probable cause for an arrest, this is only an investigative lead", while the detectives will say "well, we got a hit from a line-up" and blame the witness, and the witness would probably say "well, the detectives showed me a line-up and he seemed like the right guy" (or, as is often the case with line-ups, the detectives may have exerted a huge amount of bias/influence over the witness).

EDIT: Just to be clear, none of this is to say that the process worked well or that I condone this. I think the data, the technology, the processes, and the level of understanding on the side of the police are all insufficient, and I do not support how this played out, but I think it is easy enough to provide at least some pseudo-justification at each step along the way.

[+] zaroth|5 years ago|reply
> Does this tool give any indication of a confidence value?

Yes.

> Does it return a list (sorted by confidence) of possible suspects,

Yes.

> ... or any other kind of feedback that would indicate even to a layperson how much uncertainty there is?

Yes it does. It also states in large print heading “THIS DOCUMENT IS NOT A POSITIVE IDENTIFICATION IT IS AN INVESTIGATIVE LEAD AND IS NOT PROBABLE CAUSE TO ARREST”.

You can see a picture of this in the ACLU article.

The police bungled this badly by setting up a fake photo lineup with the loss prevention clerk who submitted the report (who had only ever seen the same footage they had).

However, tools that are ripe for misuse do not get a pass because they include a bold disclaimer. If the tool/process cannot prevent misuse, the tool/process is broken and possibly dangerous.

That said, we have little data on how often the tool results in catching dangerous criminals versus how often it misidentifies innocent people, and little data on whether those innocent people skew toward a particular demographic.

But I have a fair suspicion that dragnet techniques like this can, unfortunately, be both effective and problematic.

[+] Pxtl|5 years ago|reply
Interesting and related: a team made a neat "face depixelizer" that takes a pixelated image and uses machine learning to generate a face that should match it.

What's hilarious is that it generates faces that look nothing like the original high-resolution images.

https://twitter.com/Chicken3gg/status/1274314622447820801

[+] caconym_|5 years ago|reply
People are not good at understanding uncertainty and its implications, even if you put it front and center. I used to work in renewable energy consulting and I was shocked by how aggressively uncertainty estimates are ignored by those whose goals they threaten.

In this case, it's incumbent on the software vendors to ensure that less-than-certain results aren't even shown to the user. American police can't generally be trusted to understand nuance and/or do the right thing.

[+] adim86|5 years ago|reply
I blame TV shows like CSI and all the other crap out there that make pixelated images look like something you could "zoom" into, or something the computer can still understand even if the eye cannot. Because of this, non-technical people do not really understand that pixelated images have LOST information. Add that to the racial situation in the U.S. and the inaccuracy of the tool for black people. Wow, this can lead to some really troublesome results.
[+] throwaway894345|5 years ago|reply
> But the other side of it is: would police be more hesitant to act on such fuzzy evidence if the top match appeared to be a middle-class Caucasian (i.e. someone who is more likely to take legal recourse)?

Honest question: does race predict legal recourse when decoupled from socioeconomic status, or is this an assumption?

[+] bsenftner|5 years ago|reply
> The issue of face recognition algorithms performing worse on dark faces is a major problem.

This needs to be coupled with the fact that people (police included) without diverse racial exposure are terrible at identifying people outside their own ethnicity. The photo/text article shows the top of the "Investigative Lead Report" as an image. You mean to say that not one cop who saw the two images side by side stopped and said, "hey, these are not the same person!"? They did not, and that's because their brains could not see the difference.

This is a major reason police forces need to be ethnically diverse. That exposure alone lets members of the force who never grew up or spent time outside their own ethnicity learn to tell apart a range of similar but different people from other ethnicities.

[+] peroporque|5 years ago|reply
It wouldn't make it into the newspapers, so it doesn't matter.
[+] mnw21cam|5 years ago|reply
This is a classic example of the false positive rate fallacy.

Let's say that there are a million people, and the police have photos of 100,000 of them. A crime is committed, they pull the surveillance footage, and they match it against their database. They have a funky image-matching system with a false positive rate of 1 in 100,000 people, which is way more accurate than I think facial recognition systems are right now, but let's just roll with it. Of course, on average, this system will produce one positive hit per search (100,000 photos at a 1-in-100,000 false positive rate). So the police roll up to that person's home and arrest them.

Then, in court, they get to argue that their system has a 1 in 100,000 false positive rate, so there is only a 1 in 100,000 chance that this person is innocent.

Wrong!

There are ten people in the population of 1 million for whom the software would comfortably produce a positive hit. They can't all be the culprit. The chance that the person is innocent isn't 1 in 100,000 - it is in fact at least 9 out of 10. This person just happens to be the one of those ten who had the bad luck to be stored in the police database. Nothing more.
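
Worked through explicitly (a minimal sketch of the same arithmetic; every figure comes from the scenario above, not from any real system):

    # The parent's numbers, worked through.
    population = 1_000_000
    database_size = 100_000
    false_positive_rate = 1 / 100_000   # per person compared

    # Searching the 100,000-photo database yields ~1 false hit on average:
    print(f"{database_size * false_positive_rate:.1f} false hits per search")

    # Across the whole city, ~10 people would match the footage:
    lookalikes = population * false_positive_rate

    # Even if the real culprit also matches, a given hit is almost
    # certainly one of the innocent lookalikes:
    p_innocent = lookalikes / (lookalikes + 1)
    print(f"P(matched person is innocent) ~= {p_innocent:.0%}")   # ~91%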

[+] sirsar|5 years ago|reply
See also: Privileging the hypothesis.

If I'm searching for a murderer in a town of 1,000, it takes about 10 independent bits of evidence to single out the right person. And by the time I charge someone, I must already have the vast majority of that evidence. To say "oh well, we don't know that it wasn't Mr. or Mrs. Doe, let's bring them in" is itself a breach of the Does' rights. I'm ignoring 9 of the 10 bits of evidence!
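
A quick check of the "10 bits" figure (the lineup comparison is my own illustration, anticipating the 1-in-6 below):

    import math

    # Singling out one person in a town of 1,000 requires log2(1000)
    # bits of evidence:
    print(math.log2(1000))   # ~9.97, i.e. "about 10 bits"

    # For scale: a pick from a 6-photo lineup, taken completely at
    # face value, carries at most log2(6) bits -- nowhere near enough
    # on its own.
    print(math.log2(6))      # ~2.58 bits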

Using a low-accuracy facial recognition system and a low-accountability lineup procedure to elevate some random man who did nothing wrong from presumed-innocent to 1-in-6 to prime suspect, without having the necessary amount of evidence, is committing the exact same error and is nearly as egregious as pulling a random civilian out of a hat and charging them.

[+] Buttons840|5 years ago|reply
There's a good book called "The Drunkard's Walk" that describes a woman who was jailed after two of her children died of SIDS. The prosecution argued that the odds of this happening were 1 in a million (or something like that), so the woman was probably a baby killer, and it had statisticians make this argument in court. The woman was found guilty.

She later won on appeal, in part because the defense showed that the testimony and argument of the original statisticians were wrong.

This stuff is so easy to get wrong. A little knowledge of statistics can be dangerous.

[+] x87678r|5 years ago|reply
Definitely they should have everyone's 3d image in the system. DNA too.
[+] ghostpepper|5 years ago|reply
He wasn't arrested until the shop owner had also "identified" him. The cops used a single frame of grainy video to pull his driver's license photo, and then put that photo in a lineup and showed it to the store clerk.

The store clerk (who hadn't witnessed the crime and was going off the same frame of video fed into the facial recognition software) said the driver's license photo was a match.

There are several problems with the conduct of the police in this story but IMHO the use of facial recognition is not the most egregious.

[+] malwarebytess|5 years ago|reply
The story is the same one that all anti-surveillance, anti-police-militarization, pro-privacy, and anti-authoritarian people foretell: good technology will be used to enable, amplify, and justify civil rights abuses by authority figures, from your local beat cop to a faceless corporation, a milquetoast public servant, or the president of the United States.

Our institutions and systems (and maybe humans in general) are not robust enough to cleanly handle these powers, and we are making the same mistake over and over and over again.

[+] businesslucas|5 years ago|reply
It is not clear to me whether the person who identified him was the shop owner or a clerk. From the NYT article: https://www.nytimes.com/2020/06/24/technology/facial-recogni...

"The Shinola shoplifting occurred in October 2018. Katherine Johnston, an investigator at Mackinac Partners, a loss prevention firm, reviewed the store’s surveillance video and sent a copy to the Detroit police"

"In this case, however, according to the Detroit police report, investigators simply included Mr. Williams’s picture in a “6-pack photo lineup” they created and showed to Ms. Johnston, Shinola’s loss-prevention contractor, and she identified him. (Ms. Johnston declined to comment.)"

[+] bsenftner|5 years ago|reply
Yes, this is a story of police misconduct. The regulation of facial recognition that is required is regulation against police/authority stupidity. The FR system helps throw away the misses, leaving investigative leads. But if a criminal is not in the FR database to begin with, any results from the FR system are a waste of time.
[+] js2|5 years ago|reply
> "I picked it up and held it to my face and told him, 'I hope you don't think all Black people look alike,' " Williams said.

I'm white. I grew up around a sea of white faces. Often when watching a movie filled with a cast of non-white faces, I will have trouble distinguishing one actor from another, especially if they are dressed similarly. This sometimes happens in movies with faces similar to the kinds I grew up surrounded by, but less so.

So unfortunately, yes, I probably do have more trouble distinguishing one black face from another vs one white face from another.

This is known as the cross-race effect and it's only something I became aware of in the last 5-10 years.

Add to that the fallibility of human memory, and I can't believe we still use lineups at all. Are there any studies of how often lineups identify the wrong person?

https://en.wikipedia.org/wiki/Cross-race_effect

[+] SauciestGNU|5 years ago|reply
I lived in South Africa for a while and heard many times, with varying degrees of irony, "you white people all look the same" from black South Africans. So yes, it's definitely a cross-racial recognition problem, and it's probably also a problem of distinguishing between members of visible minorities using traits beyond the most noticeable othering characteristic.
[+] Anthony-G|5 years ago|reply
There is just so much wrong with this story. For starters:

The shoplifting incident occurred in October 2018, but it wasn't until March 2019 that the police uploaded the security camera images to the state image-recognition system, and they then waited until the following January to arrest Williams. Unless there was something special about that date in October, there is no way for anyone to remember what they were doing on a particular day 15 months earlier. Though, as it turns out, the NPR report states that the police did not even try to ascertain whether he had an alibi.

Also, after 15 months, there is virtually no chance that any eye-witness (such as the security guard who picked Williams out of a line-up) would be able to recall what the suspect looked like with any degree of certainty or accuracy.

This WUSF article [1] includes a photo of the actual “Investigative Lead Report”, and the original image is far too dark for anyone (human or algorithm) to recognise the person. It’s possible that the original is better quality and that more detail can be discerned by applying image-processing filters – but it still looks like a very noisy source.

That same “Investigative Lead Report” also clearly states that “This document is not a positive identification … and is not probable cause to arrest. Further investigation is needed to develop probable cause of arrest”.

The New York Times article [2] states that this facial recognition technology, which Michigan taxpayers have paid millions of dollars for, is known to be biased, and that the vendors do “not formally measure the systems’ accuracy or bias”.

Finally, the original NPR article states:

> "Most of the time, people who are arrested using face recognition are not told face recognition was used to arrest them," said Jameson Spivack

[1] https://www.wusf.org/the-computer-got-it-wrong-how-facial-re...

[2] https://www.nytimes.com/2020/06/24/technology/facial-recogni...

[+] jandrewrogers|5 years ago|reply
It isn't just facial recognition: license plate readers can have the same indefensibly Kafkaesque outcomes, where no one is held accountable for verifying computer-generated "evidence". Systems like the one in the article make it so cheap for the government to make a mistake, with so few consequences, that it simply accepts mistakes as a cost of doing business.

Someone I know received vehicular fines from San Francisco on an almost weekly basis solely from license plate reader hits. The documentary evidence sent with the fines clearly showed her car had been misidentified, but no one ever bothered to check. She was forced to fight each and every fine because they come with a presumption of guilt, and as soon as she cleared one they would send her a new one. The experience became extremely upsetting for her; the entire bureaucracy simply didn't care.

It took threats of legal action against the city for them to set a flag that apparently causes violations attributed to her car to be manually reviewed. The city itself claimed the system was only 80-90% accurate, but they didn't believe that to be a problem.

[+] vermontdevil|5 years ago|reply
From the ACLU article:

Third, Robert’s arrest demonstrates why claims that face recognition isn’t dangerous are far-removed from reality. Law enforcement has claimed that face recognition technology is only used as an investigative lead and not as the sole basis for arrest. But once the technology falsely identified Robert, there was no real investigation.

I fear this is going to become the norm in police investigations.

[+] vmception|5 years ago|reply
> Federal studies have shown that facial-recognition systems misidentify Asian and black people up to 100 times more often than white people.

The idea behind inclusion is that this product would never have made it to production if the engineering teams, product team, executive team, and board represented the population. Even just enough representation that there is a countering voice would help.

The reaction would simply have been: "this edge case is not an edge case at all - axe it."

Accurately addressing a market is the point of a corporation, more than maintaining an illusion of meritocracy among its employees.

[+] gentleman11|5 years ago|reply
The discussion about this tech revolves around accuracy and racism, but the real threat is global, unlimited surveillance. China is installing 200 million facial recognition cameras right now to keep its population under control. It might be the death of human freedom as this technology spreads.

Edit: one source says it is 400 million new cameras: https://www.cbc.ca/passionateeye/m_features/in-xinjiang-chin...

[+] sneak|5 years ago|reply
Another reason it's absolutely insane that, in a free society, the state demands to know where you sleep at night. These clowns were able to just show up at his house and kidnap him.

The practice of disclosing one's residence address to the state (where it is sold to data brokers [1] and accessible to stalkers and the like) needs to stop while these kinds of abuses are happening. There's absolutely no reason an ID should be gated on the state knowing your residence. It's none of their business. (It's not on a passport. Why is it on a driver's license?)

[1]: https://www.newsweek.com/dmv-drivers-license-data-database-i...

[+] w_t_payne|5 years ago|reply
Perhaps we, as technologists, are going about this the wrong way. Maybe, instead of trying to reduce the false alarm rate to an arbitrarily low number, we should develop constant false alarm rate (CFAR) systems, so that users know they will get some false alarms and develop procedures for responding appropriately. That way we could get the benefit of the technology while ensuring that the system as a whole (man and machine together) is designed to be robust, with appropriate checks and balances.
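
A minimal sketch of what CFAR-style thresholding could look like, on entirely synthetic match scores (the score distribution and target rate are invented):

    import random

    # CFAR idea borrowed from radar: fix the false-alarm rate you can
    # tolerate, then derive the match threshold from the empirical
    # score distribution of known non-matching pairs.
    random.seed(1)
    nonmatch_scores = sorted(random.gauss(0.40, 0.10) for _ in range(100_000))

    target_far = 0.001   # accept one false alarm per 1,000 comparisons
    cutoff_index = int(len(nonmatch_scores) * (1 - target_far))
    threshold = nonmatch_scores[cutoff_index]
    print(f"threshold for a constant FAR of {target_far}: {threshold:.3f}")

By construction, users then know exactly how many false alarms to expect per thousand comparisons, and procedures can be designed around that known, constant rate.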
[+] hpoe|5 years ago|reply
I don't think using facial recognition to help identify probable suspects is necessarily wrong, but arresting someone based on a facial-match algorithm alone is definitely going too far.

Of course, I really blame the AI/ML hucksters for part of this mess: they have sold us the idea of machines replacing, rather than augmenting, human decision making.

[+] czbond|5 years ago|reply
A few things I just don't have the stomach for as an engineer: writing software that

- impacts someone's health
- impacts someone's finances
- impacts someone's freedoms

Call me weak, but I think about the "what ifs" a bit too much in those cases. What if my bug keeps someone from selling their stock and they lose their savings? What if the wrong person is arrested? And so on.

[+] at_a_remove|5 years ago|reply
I think that your prints, DNA, and so forth must, in the interests of fairness, be utterly erased from all systems in the case of a false arrest, with some kind of enormous, ruinous financial penalty for organizations that fail to comply, as well as automatic jail time for the personnel involved. These things need teeth to happen.
[+] rusty__|5 years ago|reply
Any defence lawyer with more than 3 brain cells would have an absolute field day deconstructing a case brought solely on the basis of a facial recognition match. What happened to the idea that police need to gather a variety of evidence confirming their suspicions before making an arrest? Even a state prosecutor wouldn't authorize a warrant based on such flimsy methods.
[+] ARandomerDude|5 years ago|reply
True, but the defendant is still financially, and in many cases professionally, ruined.
[+] failuser|5 years ago|reply
Getting a lawyer who would advise anything beyond pleading guilty for a reduced sentence is not the default option.
[+] jackklika|5 years ago|reply
The company that developed this software is DataWorks Plus, according to the article. Name and shame.
[+] FpUser|5 years ago|reply
And then, in some states, employers are allowed to ask on an employment application whether you have ever been arrested (never mind convicted of any crime). Sure, keep putting people down. One day it might catch up with China's social scoring policies.