So much attention is always paid to accuracy when facial recognition comes up. Even if it were 100% accurate, it's a technology that makes mass surveillance too efficient. Prohibitions also need to extend to the private sector, with the exception of, say, facial recognition for personal use (e.g., FaceID).
>So much attention is always paid to accuracy when facial recognition comes up.
Because many people are OK with mass surveillance as long as it's used fairly. The problem they see with this tech isn't that it will create the kind of world we don't want to live in, but that it unfairly targets minorities and/or the poor more than other people. They don't understand that it can never be fair, because the institutions that run it can never be fair: there will always be out-groups, because that's how human nature works, and those groups will get disproportionately screwed unless we limit the ability of the majority to screw them.
Yeah. No more hoping for a cyberpunk future where people survive in the cracks. If surveillance tech grows, we'll get a society with nowhere to hide at all.
I agree that the government shouldn't be able to deploy this tech. We should have some expectation of freedom and pseudonymity in public. However, everyone always forgets that it is trivial to track your (almost) exact physical location at all times with our constant mobile phone use. We don't seem to mind this much at all.
I would really like to see one of those movies now where they identify the unknown guy by photo as a big-time terrorist, only this time it turns out he's a janitor at the local high school. Then it can be one of those feel-good cop/unlikely-partner action-comedy flicks, but only halfway through; the first part will be trying to catch the terrorist, who keeps getting away from them by doing janitorial work at unexpected junctures.
Sounds a little Amish. I ended up in Amish country when traveling last century, and it was a little weird. But I went to a house museum, and it was interesting.
One of the things that stuck with me as different: they take a little time to look at a new technology and decide if they want to use it. I wonder if our embrace of tech as being "neutral" is correct sometimes.
"They're more cautious — more suspicious — wondering is this going to be helpful or is it going to be detrimental? Is it going to bolster our life together, as a community, or is it going to somehow tear it down?"
https://www.npr.org/sections/alltechconsidered/2013/09/02/21...
Of course, predicting where things end up when new tech disrupts is difficult.
From the ban: https://assets.documentcloud.org/documents/6956465/Boston-Ci...
"Nothing in (b)(1) shall prohibit Boston or any Boston official from:
a. using evidence relating to the investigation of a specific crime that may have been generated from a face surveillance system, so long as such evidence was not generated by or at the request of Boston or any Boston official;"
So if a third party, say the FBI or DEA, provides info from face surveillance systems to Boston without Boston specifically requesting it, Boston could use that.
It’s not just about the percentage of false positives, it’s about how much easier it is to generate false positives.
Human face-matching accuracy is worse than software's in many scenarios ("is this the man that robbed you?"), but it requires so much effort that the absolute number of false positives is low.
On the other hand, facial recognition hooked up to cctv can passively generate mountains of matches all day long for pennies.
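To put rough numbers on that asymmetry, here is a minimal back-of-the-envelope sketch; the camera throughput, watchlist size, and false-match rate are all invented assumptions for illustration, not measured figures:

```python
# Back-of-the-envelope: passive face matching against a watchlist.
# Every figure below is an illustrative assumption.
faces_seen_per_day = 50_000   # faces captured by one busy CCTV feed (assumed)
watchlist_size = 10_000       # identities in the matching database (assumed)
false_match_rate = 1e-5       # false matches per face-pair comparison (assumed)

comparisons_per_day = faces_seen_per_day * watchlist_size
expected_false_matches = comparisons_per_day * false_match_rate

print(f"{comparisons_per_day:,} comparisons/day")
print(f"~{expected_false_matches:,.0f} expected false matches/day from one camera")
```

Even with a per-comparison error rate that sounds tiny, the sheer volume of automatic comparisons turns it into thousands of spurious matches a day, which is exactly the difference from effortful human eyewitness matching described above.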
This must just be a ban of the technology for government use, right? Can retailers still use facial recognition as part of their in-store analytics? Can Apple still sell FaceID products in Boston?
I think I fall in with the HN majority in my privacy views. Yesterday, however, I talked to someone who said that they prefer that someone is always watching, so that they can feel safer.
Interestingly they also said they don't want to know the specifics of anyone watching.
I wonder if laws like this, which in actuality seem fairly toothless, will result in more of that: "safety, and ignorance of where/who the watchers are."
If the basis of banning facial recognition technology is its poor accuracy, will facial recognition technology be unbanned if it is 99% accurate for everyone?
Even 99.999% is not good enough, because you will have at least one person who is all but guaranteed a false arrest and prosecution (and 90%+ of those prosecuted take a plea deal). There is inaccuracy with other methods as well, but there you have humans being held accountable. When it comes to justice, mistakes are tolerable so long as adequate compensation exists, but when the mistake is systemic it becomes intolerable: there is a preexisting guarantee of a mistake, as opposed to a specific human making an error as a matter of chance. This is all exacerbated by other systemic cruelties of the US justice system, where even an arrest and release without cause means days if not weeks of imprisonment, and where, if charged, most people accept a plea bargain regardless of actual guilt. It's better to let actually guilty criminals get away than to explicitly and systemically accept even one innocent person being punished, because, among other reasons, the justice system has legitimacy only insofar as its goal is to administer justice; accepting any amount of built-in injustice invalidates that legitimacy and authority.
A good analogy would be a chef tolerating fecal matter in their food. Yes, there is always some small chance of that happening, but no one accepts food from a chef who explicitly uses treated toilet water and claims 99% of the fecal matter is gone and only one in 100 people will get sick from it.
I hope not. I am ok with it as long as we don't plaster the whole public space with video cameras. That would be more than unfortunate. I doubt most places would gain anything from it.
I am not keen on my state knowing my whereabouts either, and I think camera deployment would create more problems than it solves. Countries where it has been deployed don't have impressive advantages to show for it, and they get all the privacy disadvantages. So why should we even consider it? What problem would it solve?
My opinion: if you are still scared of terrorists, a psychiatrist might be more useful than a camera.
If you assume the best, then poor accuracy is just the easiest justification, not necessarily the only one, the best, or the most important.
I’ll take it.
There are narrow use cases where I think facial recognition in law enforcement would be a good thing, but it is ultimately too much power, and too easily abused, to trust legal systems to adhere to right usage. Banning is the correct action when misuse is this bad and correct use this complex.
99% accurate would still lead to a huge number of false positives because most people aren't criminals. The base rate of non-criminals is so much larger than the number of criminals that there would be hundreds of thousands or millions of false positives, depending on how widely facial recognition was deployed. There would be many times more false positives than accurate identifications.
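The base-rate arithmetic behind this can be sketched in a few lines; the population and target counts are invented purely for illustration:

```python
# Base-rate sketch: why a "99% accurate" matcher yields mostly false positives.
# All counts are illustrative assumptions.
population = 1_000_000   # people scanned (assumed)
wanted = 100             # genuine targets among them (assumed)
sensitivity = 0.99       # fraction of targets correctly flagged
specificity = 0.99       # fraction of innocents correctly passed over

true_pos = wanted * sensitivity
false_pos = (population - wanted) * (1 - specificity)
precision = true_pos / (true_pos + false_pos)

print(f"true positives:  {true_pos:.0f}")
print(f"false positives: {false_pos:,.0f}")
print(f"chance a flagged person is actually wanted: {precision:.1%}")
```

Under these assumptions, roughly 9,999 innocent people get flagged against 99 real targets: about a 1% chance that any given flag is correct, despite the "99% accurate" headline.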
Let me start off by saying that I think we need to be careful with this type of tech. Assuming it never makes mistakes can be deadly.
As someone who has done some work building deep learning models, what is it that makes this unfairly target minorities?
Is it that the people who trained the model did not present enough example images of minorities during training? Is it because darker (presumably Black) skin does not show up as well on poor-quality video (presumably because the camera metered for a bright surrounding background)? Or is it that the law enforcement officers using it were poorly trained and assumed the computer was infallible, combined with prejudice they may already have had against minorities?
The first problem, I would think, could be easily solved. The second would be rather difficult. The last would require extensive training, but I am sure we humans would screw that up too.
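One concrete way to make the first problem visible is a per-group audit: compare false-match rates across demographic groups on a labeled evaluation set. The tiny dataset and group labels below are fabricated for illustration only:

```python
# Per-group fairness audit sketch, with fabricated data.
# Each record: (demographic_group, model_predicted_match, truly_same_person)
records = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

def false_match_rate(group: str) -> float:
    # Among pairs that are NOT the same person, how often did the model say "match"?
    preds = [pred for g, pred, truth in records if g == group and not truth]
    return sum(preds) / len(preds)

for g in ("A", "B"):
    print(f"group {g}: false-match rate {false_match_rate(g):.2f}")
```

If one group's false-match rate is a multiple of another's (here 0.67 vs 0.33), a uniform confidence threshold silently shifts the burden of misidentification onto that group, whatever the headline accuracy number says.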
I think you’re starting from the wrong place in your analysis. The first question we should be asking is why we would want this technology at all. The potential for bias is a moot point if we as a society decide that we don’t want this kind of government surveillance.
Even if the systems were perfectly fair and not the least bit biased and were operated by a perfect utopian police force I still wouldn’t want facial recognition. I’ve yet to hear a potential benefit of this sort of software that would justify the huge cost to citizen privacy.
Just because we can train computers to recognize faces doesn’t mean that we should.
For those unfamiliar with Boston governance, "Boston" here means the City of Boston, population about 0.7 million (a Seattle or El Paso), rather than a "Greater Boston" aggregation of municipalities totaling 2 to 8 million.
For a NYC analogy, imagine its historical consolidation was more limited, and many of its towns and cities remain independent, never having consolidated into boroughs, and the boroughs into one big city. Flushing, Brooklyn Heights, Kingsbridge, are still independent towns. Here, the city council of a city occupying only lower Manhattan, but confusingly named "City of New York", just voted on face recognition.
Interesting. So the city's tying its own hands. I assume private companies can still use their own resources to do their own individual identification, though.
The stated complaint in the article is accuracy. It also says the Boston PD doesn't use the tech yet; this is preemptive. I wonder if they are aware that tests conducted by the ACLU and the like didn't use the recommended configurations for precision. Not to mention that false positives don't matter as long as there's a human in the loop to validate the match, because then it is no worse than the minor risk of a false match we accept even without facial recognition.
False positives matter. Being arrested for something you didn't do is horrific and life-changing. And the human in the loop is human: we unconsciously tend to trust machines as objective and accurate. This is not as simple as you make it out to be.
>I wonder if they are aware that tests conducted by the ACLU and the like didn’t use the recommended configurations for precision
As I mentioned in other similar threads, I also wonder whether the police are using the recommended configs rather than the defaults. It doesn't help if somewhere in the docs it says, in small print, "we recommend that police use X, but everyone else can use Y, so we default to Y; if you are the police, we recommend you change Y to X."
I would be pretty angry about a false suspicion. It is a constant additional life risk without any practical advantage. Sure, maybe I end up the victim of some crime and am then glad a video exists. Be that as it may, it isn't worth it to me.
Is the American justice system so broken and law enforcement so incompetent that a computer flagging someone is basically an automatic conviction and the technology has to be banned?
No evidence, by itself, should be enough to get a conviction. Not your fingerprints all over the scene of a crime, not a video of someone who looks like you, not even your DNA matching a rape kit.
The fact that someone was arrested because of a match shows a failure in basic criminal investigation more than anything else.
It's because it can track the whereabouts of everyone. It's a massive privacy issue. Do you want the government to know where you go, what you spend your money on, and who you associate with? I sure as hell don't, even if they wouldn't find a problem with it.
It really doesn't matter what the legislation says - no doubt they have already been doing it and calling it something else. It's going to be one of the worst things to come out of the 21st century.
Yes, it is that broken. First, simply being charged means you're automatically out $5k for a criminal defense attorney, plus missed employment. Second, the criminal justice system appears to have little respect for the fallibility of technology, especially when its results are highly correlated with other fallible evidence (e.g., a police lineup).
There's at least one prominent case of someone being arrested solely on a face-recognition match (https://news.ycombinator.com/item?id=23628394), and there are at least a few cases where someone was convicted due to mistaken identity; let me just pick one: https://www.nbcnews.com/news/us-news/kansas-man-who-blamed-w... So I wouldn't call it "automatic", but since there is a false-positive rate in convictions, it's important to reduce false positives at the earlier steps as well.
Seems dumb; facial recognition has a lot of potential. I love entering the UK with facial recognition and a passport and going straight through the border: no lines, no talking.
"Presumed Innocent" https://www.imdb.com/title/tt0100404/
"No way out" https://www.imdb.com/title/tt0093640/
Bravo!
Think 99.99%, and even then.
99% accuracy in a city setting means (more or less) that you'll get the wrong person 1 time in 100.
If you're scanning a crowd with this, you can imagine what can happen.
Structurally—not necessarily individually, though often that too—malevolent, not incompetent, at least as is most relevant to the issue at hand.
Umm... I'd like to see technology which passes this test.
Relevant obligatory xkcd: https://xkcd.com/2030/