item 22492671

Before Clearview Became a Police Tool, It Was a Secret Plaything of the Rich

267 points | pseudolus | 6 years ago | nytimes.com | reply

139 comments

[+] DennisP|6 years ago|reply
Clearview now says the app is "available only for law enforcement agencies and select security professionals to use as an investigative tool."

I'm fairly convinced by David Brin's argument that this is exactly the wrong approach. In The Transparent Society he argues that privacy is no longer an option; our choice is between a society where the police surveil us all, and one where we all surveil each other. He says only the latter is compatible with freedom. We have to be able to monitor the cops and the powerful, just like they monitor us.

Maybe we need our own Clearview, with open source face recognition and data on the darknet.

[+] pletsch|6 years ago|reply
I'm convinced that Gen Z is going to blow all of this out of the water - that the world will eventually hit a point where there isn't anyone without something online that could come back to haunt them. And I think companies are starting to see it too.

A little anecdotal, but when I was applying for jobs last summer, a video of mine was brought up twice during interviews with different organizations. Both times I was told afterward that it was just to see how I would react. All I did was tell them what happened. I didn't say I was sorry, because I wasn't; I told them I stood behind my actions and that there was nothing I could do about it now (it hit 100k views on Instagram, so it's not something I can contain). Both companies offered me a job.

I rewrote this a few times and I'm still not sure I got my point across, but I agree with the argument in The Transparent Society, and I can already see things shifting in that direction.

[+] jborichevskiy|6 years ago|reply
> our choice is between a society where the police surveil us all, and one where we all surveil each other

This idea is explored further in Nick Bostrom's Vulnerable World Hypothesis under "Preventive policing". It's depressing but does walk through this scenario in more detail than most.

> The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk‐benefit balance of developments towards ubiquitous surveillance or a unipolar world order.

https://onlinelibrary.wiley.com/doi/full/10.1111/1758-5899.1...

[+] raxxorrax|6 years ago|reply
I do think privacy is still an option, and I see no evidence to the contrary. Of course transparency should apply to officials acting in their public capacity, not to their private lives.

Anyone with a hint of a brain should be able to extrapolate the negatives such surveillance would have on their lives. We have played dumb on that topic long enough.

If you are not up for that, ok, but you also should not work in a capacity that has any access to private information.

If there are companies or people regularly ignoring privacy, which is still enshrined in most countries' laws, severe penalties should apply. That goes for the executive branch of government too, where enforcement has been severely neglected over the last decade because of fear.

[+] vorpalhex|6 years ago|reply
I disagree with Brin, but I do think that if a creepy clone of Clearview suddenly became publicly accessible and was aimed at the rich and powerful, we sure would see regulation happen awfully quick...
[+] mc32|6 years ago|reply
I don’t think that’s how it works.

Going by the way online mobs work, making these tools available to all citizens, unencumbered, would enable all kinds of abuses.

[+] matz1|6 years ago|reply
Yes, transparency is the only practical and realistic way to deal with privacy. Information wants to be free. The sooner we learn to adapt and resolve the issues that come from our information being public, the better.
[+] harimau777|6 years ago|reply
I'm not sure that I think that a completely transparent society would be compatible with freedom. It seems to me that it would just result in the dominant culture in an area being able to punish any behavior that they dislike.
[+] JohnFen|6 years ago|reply
> He says only the latter is compatible with freedom.

I disagree with him on this point. Neither choice he presents is compatible with freedom.

[+] robomc|6 years ago|reply
That's very theoretical, though. The truth is that, for most people, the police aren't going to target them with this, and even if they do, the police already have numerous means of invading someone's privacy.

Arguing that everyone should have those means at their fingertips, in practice, will mean the likelihood of this being used on you goes way up, and that there's no longer even the theoretical oversight that police use would have.

[+] asveikau|6 years ago|reply
The thing about law enforcement use is that for however bad civilian use is, the consequences of false positives are severe.

Missed out on a job opportunity or social interaction unfairly? That sucks.

But here we are talking potential for wrongful incarceration. And since some places still have the death penalty, theoretically it could lead to the state killing people based on misidentification.

[+] xg15|6 years ago|reply
Well, you can try that out right now: Hook up a 24h video/audio feed to your phone, then share it with your wife, ex, parents, friends, co-workers, boss and the annoying uncle from Utah that you're obligated to invite to Christmas. Also give them full access to your browsing history, messengers, purchase history and location data. Then come back to us in a month and tell us how it all worked out.

Really, no, I think this is a fundamentally bad idea. The idea of everyone surveilling everyone else can only be remotely appealing if you pretend there are no power differences in the world and everyone is equal - a fiction that the tech world, for whatever reason, loves to subscribe to.

In the real world, people will have vastly different amounts of understanding for whatever weird sides you have, and their knowledge can have serious consequences. Just because your boss may have some weird sides himself does not necessarily mean he will be understanding of yours - or that whatever he did will be treated as equally relevant as what you did.

[+] dsfyu404ed|6 years ago|reply
If the government succeeds in getting advanced ML/AI type software classified as arms (that idea comes up from time to time in the context of reducing industrial espionage) then in the US that would naturally lend itself to the argument that private ownership and operation of these sorts of surveillance systems are protected under the 2A.

The arguments traditionally used to prevent private ownership of the kind of arms that let you go toe to toe with a modern military don't really work very well for software.

[+] ausbah|6 years ago|reply
>“People were stealing our Häagen-Dazs. It was a big problem,” he said. He described Clearview as a “good system” that helped security personnel identify problem shoppers.

>BuzzFeed News has reported that two other entities, a labor union and a real estate firm, also ran trials with a surveillance system developed by Clearview to flag individuals they deemed risky. The publication also reported that Clearview’s software has been used by Best Buy, Macy’s, Kohl’s, the National Basketball Association and numerous other organizations.

This seems like just another tool that will be used to put the non-elite at a disadvantage. Until proven otherwise, I can only think of negative outcomes for ex-felons, low-wage workers, people of color, and the like coming from the use of this app by the rich and large corporations.

[+] Spooky23|6 years ago|reply
> This seems like just another tool that will be used to put the non-elite at a disadvantage. Until proven otherwise, I can only think of negative outcomes for ex-felons, low-wage workers, people of color, and the like coming from the use of this app by the rich and large corporations.

You lack imagination. This will be used by HR, attorneys, private investigators and others for all sorts of purposes.

- Were you really out sick? According to our partners at Foo Corporation, you were eating a churro in front of a movie theater at 10:45AM on the day you were out.

- Why were you talking to <x> in the parking lot after work?

- Our security provider identified you in a social gathering with 3 other employees, or engaged in a PDA with a fellow employee. You are out of compliance with our fraternization policy and are terminated.

- You get a 30 minute meal break. Why were you leaving the men's room at 12:45?

- You were arrested for shoplifting in 1993, we are refusing entry to <x> stadium.

- Your grades have declined, and you have been seen entering your dorm after 2AM 40% of the time, scholarship revoked.

If anything, this will take the routine harassment that individuals in authority inflict on people and elevate it to a centrally controlled, legal practice.

[+] devmunchies|6 years ago|reply
> negative outcomes for... people of color

Wouldn’t it do the opposite? Instead of racial profiling it’s using a system that pulls from social media. I wouldn’t say it’s a good system but it seems less biased.

EDIT: OP said “negative outcomes” and not “unfair towards”. Yeah I guess something can be more fair but still have disproportionate outcomes.

[+] keanzu|6 years ago|reply
> ex-felons, low wage workers, people of color, and the likes

"people of color" are not "the likes" of ex-felons and low wage workers.

[+] notRobot|6 years ago|reply
We're now constantly under surveillance and facial recognition is being used on that footage. Apple stores. Malls. Bus stands. Grocery stores. Train stations. Traffic stops. Schools.

Privacy is dead. Anyone with money or any government can use one picture of you and get basically every piece of information about you.

Maybe this isn't true to the same extent for the HN crowd who might be more privacy conscious, but it is true for 99% of the rest of the population.

Schools and governments failed to educate about the privacy concerns. Maybe that's understandable. But they still don't. Teens post all sorts of stuff that will come back to bite them, that will never be forgotten.

[+] snarf21|6 years ago|reply
This is true. Only legislation will stop this and that seems unlikely.

The worst part is that most people upload high-quality, high-fidelity tagged training data on a daily basis. Add in the fact that social networks automatically build network circles, and it is no wonder we are riding a landslide towards 1984 and Fahrenheit 451.

[+] zcw100|6 years ago|reply
And they rage when the little guy does it to them. It's obvious that employers are either paying people to post good reviews or strong arming current employees to write positive reviews on Glassdoor. The good thing is they're easy to spot since they're usually some vapid garbage like "Best place in the world to work!" with 5 stars interspersed with the real 1 star reviews. You can also tell because there are usually just enough fake reviews to push the real ones off the front page.
[+] DailyHN|6 years ago|reply
Seems like they're building the hype train so that consumers want to pay for a version they can use to "take control" of their digital identity.
[+] Nextgrid|6 years ago|reply
Just like the credit reference agencies. The scum gets your data from everywhere (sometimes wrong data), shares it with whoever asks, but makes it super difficult for you to get it or rectify any wrong data.
[+] bsenftner|6 years ago|reply
Anyone who wants their own "Clearview"-like app can take almost any facial recognition application and create a database by scraping the web. All Clearview did was pre-scrape the web for you; doing that yourself, or as an open source project of sorts, is not difficult.
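To illustrate the claim above: once the scraping is done, the core of such a system is just nearest-neighbor search over face embeddings. A real pipeline would use a face-embedding model (e.g. a dlib-based one, as in the open-source face_recognition library) to turn each scraped photo into a ~128-dimensional vector. A minimal, illustrative sketch in Python, with made-up 4-dimensional embeddings standing in for real ones:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(query, database, threshold=0.6):
    """Return the name of the closest enrolled face, or None if no
    enrolled embedding is within `threshold` of the query embedding."""
    best_name, best_dist = None, float("inf")
    for name, embedding in database.items():
        d = euclidean(query, embedding)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Toy database: name -> embedding. A scraped database would map each
# identity to embeddings computed from their public photos.
db = {
    "alice": [0.1, 0.9, 0.3, 0.5],
    "bob":   [0.8, 0.2, 0.7, 0.1],
}

print(identify([0.12, 0.88, 0.31, 0.49], db))  # close to alice's embedding
print(identify([0.5, 0.5, 0.9, 0.9], db))      # nothing within threshold -> None
```

The `threshold` here is a hypothetical tuning knob, not a value from the article: set it too loose and you get false positives (the misidentification risk raised elsewhere in this thread), too tight and real matches are missed.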
[+] lwh|6 years ago|reply
I wonder how legislating open algorithms into illegality would work out. If you can run this reasonably on a phone, building your own face/object databases, how could they stop anyone from doing it?
[+] xz0r|6 years ago|reply
If they were able to do this just by scraping public social media images and having a facial recognition algorithm in place, what's stopping anyone else from doing the same?
[+] 12xo|6 years ago|reply
I'd imagine that there are many, more clandestine versions of this.
[+] hammock|6 years ago|reply
Has anyone here tested the app and can speak to their direct experience? (throwaway or not)
[+] StavrosK|6 years ago|reply
Doesn't the title imply it's not a secret plaything now?

I guess it's not secret...

[+] sub7|6 years ago|reply
I know Hoan. He's a great guy. Very talented. Super capable. Totally trust him to deploy this responsibly.
[+] iron0013|6 years ago|reply
Despite the many examples in this article and elsewhere about him already deploying it irresponsibly? Or maybe I missed the implied “/s”
[+] peter_d_sherman|6 years ago|reply
Facial recognition technology as a tool for protecting retail establishments... Interesting!
[+] 12xo|6 years ago|reply
The problem I see with this tech is insurance. Your life, your health, your well-being, your family's well-being will depend on your ability to hold good insurance. If everything is measured, everything tracked, everything quantified, your life is going to be very different.
[+] aerique|6 years ago|reply

[deleted]

[+] SuoDuanDao|6 years ago|reply
I can see why you got flagged, but even being a millennial that's a response I hope catches on.
[+] dropoutcoder|6 years ago|reply
I have lived in {} since the mid 2000’s. Stalking by strangers and acquaintances has gotten out of hand in (at least) the past five years. (Any such behavior against me has since calmed down in the past year, after reworking my digital devices, but the effects have had significant impact on me. I also dropped out and gave up on life this past year, which may make me a much less interesting target to harass.)

Such technologies are part of an ongoing increase in information and power asymmetries that can be abused to harass innocent competitors, as has happened to me. I’ve had strangers come up to me in public and discuss specifics of my private life, including non public details about my since failed startup, and personal/private comms. Concurrently, I was falsely accused of a serious crime and was put under the microscope and harassed on a regular basis by strangers regarding this. It became apparent that my life was completely owned at that point, digitally and publicly. It amounted to ongoing bullying which really pushed me beyond thresholds of learned helplessness already long since established.

There seems to be no recourse against this behavior. If you have a digital “kick me” sign attached to your back, there’s little you can do to remove it, short of avoiding being in public. Or, as in my case, one can drop out of life, go homeless, give up all of your assets, and prepare for suicide. Strangers can verbally harass/own/gaslight others, maintain perfect plausible deniability, have perfect encryption to cover their tracks, and devastate people who aren’t equipped to deal with this behavior.

Evolution of survival going forward is trending towards resilience to increasingly sophisticated psychological violence and harassment, as well as the ability to accept being an unwitting voyeur in all public places.

One of the most difficult aspects to this was reporting these incidents (admittedly, under duress in the heat of the moment), and being told that I must be delusional and mentally ill. To me, the delusion is genuinely believing that technology is not used to stalk or harass people in public. As a counterpoint, I will say that being stalked repeatedly does increase your paranoia, so you’ll start to look over your shoulder at every turn. If you believe that all of your devices and accounts are hacked and being used to harass you, the complete lack of digital privacy can have a profound impact on sanity.

To this day, I’m utterly freaked out by the presence of personal cameras, to the point where I’ve nudged people in the community to be aware of the cultural impact of holding phones vertically in coffee shops or other public places. As most people are of course good natured, I’ve noticed a trend in the places that I frequent towards people being more prudent in this regard. I personally cover the public facing back camera on my phone with my index finger as a matter of habit by now, to avoid pointing it at strangers in public. Personally I believe responsibility amongst the tech elite would include immediate installation of physical shutters that open only when a camera is in use. Shutters can be colored blue or yellow, perhaps as a culturally standardized signal that the camera is “closed”.

There’s clear benefit to tech such as Clearview but the potential for abuse by irresponsible or immoral actors is tremendous. As someone pointed out, such tech can be rolled yourself. It seems that the problem is therefore out of control. Welcome to the age of unwitting voyeurism.

Edit: I did make a comment on the linked NYT article, including my real identity. In this comment, I called out at least one person involved in shenanigans against me. This person name dropped {} as someone who would recognize him, before he trashed my startup without seeing it, encouraged me to drop out of my continuing Computer Science studies at the local University (due to the bad rep I would receive for doing so as a middle age adult, so he said), and then threatened my career/reputation if I told the truth about specific stalking incidents, all in one conversation. Not long thereafter, I experienced a stalking incident in public by two men with walkie talkies who harassed me about said startup, mentioning non-public specifics about an engagement we were seeking. In retrospect, these men could have been using tech such as Clearview to more easily enable their stalking and harassment of me. The location of this incident was the playground of wealthy folks in my city’s most affluent public area. My comment on the NYT article was not approved by the moderators, understandably.

[+] RickJWagner|6 years ago|reply
Meh. You can be recognized by humans, you can be recognized by machines. I don't get the outrage.
[+] elmo2you|6 years ago|reply
> Meh. You can be recognized by humans, you can be recognized by machines. I don't get the outrage.

Put another way: "Meh, you can die from a freak accident, you can die from a well resourced contract killer. I don't get the outrage".

Same result: you die. But not quite the same thing.

[+] tsukurimashou|6 years ago|reply
Well, the huge difference is that once you put cameras everywhere, you can see what people did at any given moment; you can have algorithms that actively look for someone and track all of their movements. If you don't see the problem with that, then I don't know what to say.

Sure, humans can recognize people, but they can only "scan" so many people in a crowd, they do it "live", and they don't remember everything they see.

[+] danso|6 years ago|reply
The top of the story features an example where human recognition was not sufficient:

> One Tuesday night in October 2018, John Catsimatidis, the billionaire owner of the Gristedes grocery store chain, was having dinner at Cipriani, an upscale Italian restaurant in Manhattan’s SoHo neighborhood, when his daughter, Andrea, walked in. She was on a date with a man Mr. Catsimatidis didn’t recognize. After the couple sat down at another table, Mr. Catsimatidis asked a waiter to go over and take a photo.

> Mr. Catsimatidis then uploaded the picture to a facial recognition app, Clearview AI, on his phone. The start-up behind the app has a database of billions of photos, scraped from sites such as Facebook, Twitter and LinkedIn. Within seconds, Mr. Catsimatidis was viewing a collection of photos of the mystery man, along with the web addresses where they appeared: His daughter’s date was a venture capitalist from San Francisco.

> “I wanted to make sure he wasn’t a charlatan,” said Mr. Catsimatidis, who then texted the man’s bio to his daughter.