My local PD wants to do this. They need the City Commission to pass a resolution allowing them to sign this agreement.
I spoke with one of my commissioners, who then had a meeting with the Police Department. The end result was that the police removed the item from the Commission meeting because they needed more time to prepare, justify, and lay out policies for its use.
Had I not reached out, this likely would have been rubber-stamped.
There are all sorts of situations where one can imagine facial recognition technology being useful. Unfortunately, all of the ways it could be used to suppress, discriminate against, silence, and intimidate people outweigh any upsides by miles. Facial recognition tech should be outlawed, unequivocally, plain and simple. I'm not holding my breath that that will happen.
I have a hard time getting too upset about this as described in the article. Sharing police fingerprints and mug shots doesn’t sound too bad.
There are of course “but”s:
- Should mug shots be retained for people later freed or not convicted? That's certainly the case today for photos, prints, and (in California) DNA.
- Expanding into the driver's license database, as mentioned in the article, seems like dangerous scope creep, though what if they are investigating a driving offense like a hit and run?
Anyway, a useful article. But not necessarily a bad thing, for a change.
To me, going around with some photos of the guy and looking for him, maybe asking folks around the neighborhood if they know him, searching through mugshots of known criminals in the area for a possible match, maybe posting his picture on their website or on posters, seems like exactly the sort of thing police should be doing. Good old-fashioned honest police work. I know they have quotas to meet, and they could bring in more revenue by ignoring it and just ticketing people for minor shit like traffic offenses, but it seems lazy to say that because a computer can't do all the work for them, they have no way to ID someone. Frankly, just by having video you've already done part of their job for them.
You must not own capital, then. If you did, they would have worked hard to help you. If you had been a big box store showing the footage to the police, they would have helped. But as an individual, they are not there for you.
I came to a realization on this subject while visiting Las Vegas: Pragmatically, a maximally high trust society and a maximally low trust society permit the same behavior patterns. In Las Vegas casinos, which are low trust and have high surveillance, one can comfortably leave tens of thousands of dollars or more on a gaming table while one goes to use the bathroom or whatever. In a hypothetical high trust low surveillance environment one could do the same.
The difference is, we know how to engineer for low trust and high surveillance. We don't really know how to engineer for high trust and low surveillance. So our practical choices are either to live in a low trust, low surveillance society and constrain ourselves as that requires, or to live in a low trust, high surveillance society that simulates a much more pleasant high trust society.
> live in a low trust high surveillance society that simulates a much more pleasant high trust society
And in a high surveillance society, whatever the initial conditions, the cost/benefit ratio of cooperation vs. defection is biased towards trust. In other words, the ersatz high trust society would become the real thing, of course only provided the surveillance is trustworthy. For example, China's social credit system might work if it is administered sans corruption. It has to be self-reflexive.
As incredible as it seems, Mrs. Grundy will save us.
Do you think there's more chance of this happening, versus Barbara down the shops being shown a photograph and saying "oh yes, that looks just like mcny from two doors down"?
A question for people who are against facial recognition: what methods of identifying people are you OK with? Are you OK with the police or the news asking other people to help identify someone? To me that sounds like facial recognition with extra steps.
The arguments against facial recognition, such as the risk of false positives or of affecting some groups more than others: don't those also apply when people are doing the identifying? If so, isn't the real solution to require more evidence than just a facial match, rather than to ban an effective way of narrowing a suspect pool? That way police can spend less time manually identifying people and more time gathering other evidence.
I’m concerned because probability is not intuitive.
Suppose a store is robbed, and there’s a video.
The police identify some suspects: the guy who just got out of jail for robbing the same store, and another person the store owner had a dispute with. Neither of them looks like the robber in the video. Then the police take a still from the video and knock on some doors around the block. Somebody recognizes the person in the video, and the police investigate that person. This scenario seems pretty fair to me.
Now suppose the police run it through the facial recognition system. It identifies one person as a 99% match, and the police go investigate this person. This scenario does not seem so fair to me.
Here’s how I see the math:
P(A) = P(robber has a doppelgänger living on the same block) = .01
P(B) = P(robber has a doppelgänger somewhere in the database) = .9
P(X) = P(police screw up investigation, and will convict the suspect whether or not they are guilty) = .2
P(AX) = .002
P(BX) = .18
The exact numbers are made up, but as long as P(A) << P(B), you can see how this tech will result in a huge increase in false convictions. Even if P(X) is low, the number of false convictions increases by a factor of P(B)/P(A).
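That arithmetic can be sketched directly, using the same made-up numbers as above:

```python
# Made-up numbers from the comment above; only the ratio matters.
p_doppel_block = 0.01  # P(A): a lookalike lives on the same block
p_doppel_db = 0.90     # P(B): a lookalike exists somewhere in a big database
p_bad_invest = 0.20    # P(X): investigation convicts whether or not guilty

p_false_neighborhood = p_doppel_block * p_bad_invest  # P(AX) ≈ 0.002
p_false_database = p_doppel_db * p_bad_invest         # P(BX) ≈ 0.18

# The database approach multiplies false convictions by P(B)/P(A).
print(p_false_database / p_false_neighborhood)
```

With these assumed inputs the ratio comes out to 90: the same sloppy-investigation rate P(X) produces ninety times as many false convictions once the doppelgänger pool is database-sized.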
There's some accountability in the human-based approach.
A person understands the context in which they are naming another person based on their face, and can weigh internally whether they are sure enough to tell the police. The algorithm just spits out a confidence score for the police or DA to do whatever they want with.
A person can be cross-examined in court. Is that little old lady a blind racist who is accusing a person of color? Her credibility can be attacked in court. The algorithm was trained by racists? That's a LOT harder to attack.
I don't think the police should be barred from using the tech, but I certainly don't think it should be admissible in court.
The danger I see in it is that everybody in the database is automatically considered a suspect for any crime in which the database is used.
If the police have to ask people to perform identification, then they have to follow some sort of investigative procedure to arrive at who they ask to perform identification, and who they present to be identified. Of course this has its own problems, but facial recognition databases really ramp those problems up.
Then if you get falsely identified, you’re gonna have to prove it. How much time do you spend every day, not committing crimes, but without an alibi to prove it?
This is something I think is justifiable in time-sensitive cases like tracking down a kidnapper, but I fear there's no way to grant such power and expect its use to always remain narrow. In my opinion, most issues in our country come from the compounding of lots of well-intentioned systems built up over several centuries that no longer even remotely serve their original purposes.
That's how every infringement of our civil liberties has come into existence: it has one or two very limited, justifiable cases, and everyone goes 100% all in on those and focuses on them, but not on what we will lose. Everyone always dismisses it as a slippery slope fallacy, but here we are, with our rights against unreasonable searches and seizures almost completely chipped away. Getting a search warrant is almost a formality at this point. Cops routinely exercise civil asset forfeiture and make it impossible to regain the goods taken, the three-letter agencies are downloading and analyzing everything we type online and building comprehensive profiles and behavior models and running our actions online against those models, etc.
This is why you want to set legal boundaries at the outset, rather than let the authorities push the envelope and allow bad practices to become the norm.
I think most people would be fine with using facial recognition technology in the instance of Amber Alerts or an escaped dangerous fugitive.
Likewise, I think most reasonable people would not feel comfortable in a situation where software is bulk scanning the faces of people entering a stadium and checking them for outstanding warrants for unpaid parking tickets etc.
A technologically sophisticated and ruthlessly efficient justice system in a progressive jurisdiction is far more oppressive than a backward, ineffective one under an authoritarian regime.
The cat's out of the bag with this technology: it exists, and citizens will demand that the police use it to help solve crimes. The way forward is to ensure that it gets used as an investigative tool rather than as part of mass surveillance.
The way to do that is to make sure that querying a facial recognition database is too expensive for ubiquitous use. A court order per face you want to identify is likely enough, though I'd like an additional $100-$1000 fee to discourage rubber-stamping.
IIRC, the police have hired people who are just really good at recognizing faces and given them a giant stack of photographs of criminals. Driving down the difficulty of that sort of process - identifying suspects with known faces - seems actually reasonable. Driving down the cost of identifying faces to the point where you can have a historical database of identity-annotated sidewalk footage of an entire city is the sort of thing we need to fight.
>IIRC, the police have hired people who are just really good at recognizing faces and given them a giant stack of photographs of criminals. Driving down the difficulty of that sort of process - identifying suspects with known faces
It's believed in some circles that this is at least sometimes just BS and a cover for parallel construction.
I often criticize SF for being overly zealous about weird issues... but its ban on facial recognition is looking more prescient by the day. I hope other cities follow suit.
Also, non sequitur, have there been any proposals for how to detect when your country has become an authoritarian dystopia?
These are essentially 'super-technologies', much like government-requested back doors and master keys. It's not an overstatement to say that the potential they hold is on par with human genetic engineering, something the civilized world has thus far agreed is a road best avoided. Sure, the ethical dilemmas are enough to give pause, but are the following not much greater risks?
* These systems will crowdsource "evidence", and by proxy, accountability - what happens when a mistake is made?
* Increased centralization will eventually enable 'single point of failure' scenarios in systems of a national/international scale.
* What happens when someone misuses or gains unauthorized access to such systems? Will the damage be reversible?
Such technologies will be deployed before accountability for them is defined, leading to a temporary absence thereof. Systems of the past (digital or otherwise) were inherently tolerant to misuse, malfunctions & misfires because only so much damage could be done before the errant behavior was discovered & handled. Today, how many seconds would it take to plunge a country into chaos?
Good. This means police will be more efficient in identifying and catching suspects. Facial recognition is always used as a first-pass filter, so the false positive rate is not relevant - it simply narrows the search space so that limited police resources are used efficiently.
That is essentially the problem to those of us who oppose it.
When you say "criminals" you are probably thinking of strictly people who harm others, but the state is thinking of anybody who breaks its laws. Currently many of those laws exist solely to keep certain groups in power, and there is no guarantee it won't get even worse in the future.
Human law enforcement ensures layers of decisionmakers who are at least theoretically capable of empathy, and limited manpower makes them prioritize the worst or most flagrant crimes. Automated law enforcement gives a smaller group of people horrifyingly granular levels of control over all of society.
landcoctos | 6 years ago
Thus, I would at a minimum like to see a clear policy for when it can be used, for image retention (if any), for image sharing, etc.
rhacker | 6 years ago
Me to cop: That's him. You can see him breaking into the car here at 10:15, and then at 12:05 we have a perfectly clear view of his face.
Cop: We can't really do anything with this. I mean, we can take a copy of the video, but we don't have any way to identify him.
Me: So we can't really do anything?
Cop: Yep, sorry.
AnthonyMouse | 6 years ago
> Cop: Yep sorry
So let's add a face recognition database. Now there are two possibilities.
One is that they've got the DMV database or some other mass database with photos of millions of people. They run your video through it and it comes back with 117 matches. They all look like your guy, because that's what facial recognition does, so which one is it? Still no way to tell. And that's assuming the perpetrator was in the database at all.
The other possibility is that they're only using something like mugshot photos. Now there are only thousands of people in the database, so they get just one match. Hurray, we've caught him! Except that it could still have been any of the other 116 people who weren't in the smaller database, or any of the thousands of people from out of state or other countries who aren't even in the bigger database. So now we're going to convict this guy because he's the only suspect and we've got a video of somebody who looks like him, even though he probably wasn't even the perpetrator.
Notice that this is only a problem for justice systems that want to be justice systems. If all you want is a pound of flesh from some random schmoe to demonstrate that you're catching bad guys even though you're not, it works great. Same if you're an authoritarian dictatorship, because then you can murder all 117 matches and to heck with the hard work of making sure you found the right person. Which is why mass surveillance does more harm than good.
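The back-of-the-envelope math behind those match counts can be sketched with assumed, illustrative numbers (the matcher's false-positive rate and the database sizes are not from the comment):

```python
# Assumed, illustrative numbers: a matcher with a 1-in-100,000
# per-comparison false-positive rate, run against two database sizes.
false_positive_rate = 1e-5

dmv_db_size = 10_000_000  # mass database, DMV scale
mugshot_db_size = 50_000  # mugshots only

# Expected number of innocent lookalikes returned per query:
false_matches_dmv = false_positive_rate * dmv_db_size          # ~100
false_matches_mugshot = false_positive_rate * mugshot_db_size  # ~0.5

print(false_matches_dmv, false_matches_mugshot)
```

The big database buries the answer in roughly a hundred lookalikes; the small one usually returns a single hit that merely looks unique, which is exactly the trap the comment describes.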
SN76477 | 6 years ago
I would not want to be in your situation, but the needs of the many outweigh the needs of the few.
mcny | 6 years ago
https://news.ycombinator.com/item?id=20563430
> Today, you are not you, you are your data, a persona. And you are somehow responsible for it or anything that casts a similar shadow.
I think it is a similar case here. The main problem won't be that my face is out there. The problem will be that the police will stop you, or stop by your house to "just have a chat", because some algorithm matched your face to some input somewhere with 90% confidence.
By the way, who owns the photographs, and how do the departments have the authority to share them with some company? If they can share them with that company, why not make the data public and let everyone have access to it? I'd like to play with the data as well.
inetknght | 6 years ago
That's quite a claim. Prove it.
> Facial recognition is always used as a first-pass filter, so the false positive rate is not relevant
I don't understand. Are you saying that using a filter negates a false positive? I don't think that's accurate at all.
> it simply narrows the search space so that limited police resources are used efficiently.
What about the rights of someone who's been falsely implicated?
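A base-rate sketch (with assumed numbers, not taken from either comment) of why the false-positive rate stays relevant even for a first-pass filter:

```python
# Assumed numbers: one true perpetrator in a city of 1,000,000;
# the matcher flags the true face 99% of the time and flags any
# given innocent face 0.1% of the time.
population = 1_000_000
sensitivity = 0.99   # chance the real perpetrator is flagged
fpr = 0.001          # chance an innocent person is flagged

expected_true_hits = 1 * sensitivity
expected_false_hits = (population - 1) * fpr  # ~1,000 innocents flagged

# Chance that any given "match" is actually the perpetrator:
precision = expected_true_hits / (expected_true_hits + expected_false_hits)
print(precision)
```

Under these assumptions, well over 99% of the people the filter hands to investigators are innocent, which is precisely the falsely-implicated problem raised above.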