
Social Cooling (2017)

2692 points | rapnie | 5 years ago | socialcooling.com

1058 comments

[+] 40four|5 years ago|reply
I think this is a good example of how pro-privacy arguments should be framed. It takes the varied aspects and complex implications of tracking users across the web (or even in the real world) and distills them into an easy-to-understand concept.

When you think of privacy in terms of 'social cooling', or consider things like China's 'social credit' system, I can't help but think we are much closer to the world depicted in the last season of Westworld than we might want to admit.

[+] bonestamp2|5 years ago|reply
Agreed. I think the audience matters too -- different messages appeal to different people.

My dad is one of those old-school guys who thinks law enforcement can do no wrong and nobody needs to hide anything unless they're doing something wrong. Even if that were true (and I do think many law enforcement personnel are trying to do good), that doesn't mean the results will always reflect their intentions. When the sample size of facts is too small, as is often the case with mass collection, it's too easy for your sample to get mixed up with someone else's. Maybe your phone is the only other phone in the area when a murder is committed. That doesn't mean you did it, but it sure makes you look like the only suspect.

I was never able to gain an inch on his argument until I asked him why he has curtains on his living room window. I mean, it faces North, so there's no need to block intense sunlight, yet he closes them every night when he's sitting there reading a book or watching TV. Why? He's not doing anything illegal, yet he still doesn't want people watching him. He said he would not be ok with the Police standing at his window all night watching him. That's when he finally understood that digital privacy is not just for criminals, but for everyone who wants to exist in a peaceful state and not a police state.

[+] smolder|5 years ago|reply
Right. Apart from the sci-fi tropes, the extreme drama, and aesthetics, it's a spitting image. A great deal of effort is quietly spent on social control, keeping things as they are, and extracting value from people-as-cows, both here and there. Any technology in a position to add robustness to that system, to reduce its upkeep effort, or improve its efficiency at generating wealth for the privileged is likely to succeed, so it's reasonable to think some of the not-yet-here but possible aspects their world will make it to ours in time.

Sometimes I think that authors who see patterns and make reasonable but dire predictions about where society is going actually end up providing a game plan to career oppressors.

[+] bogwog|5 years ago|reply
Yes, this was great. I think the slogans "Privacy is the right to be imperfect" and "Privacy is the right to be human" are both great, relatable, non-controversial, and easy to understand.
[+] joe_the_user|5 years ago|reply
Considering we see "social heating", if not "social fire", all around us, I'm not sure this informs people correctly.

My local Facebook group seethes with angry discussion just below threats of actual violence - and the actual violence was on display only a short time ago, when Back The Blue physically assaulted a Black Lives Matter demonstration (in a smallish city where "BLM" is just earnest liberals, as you'd expect). And the miscreants were readily identifiable via Facebook (which hurt their businesses if nothing else), yet they basically weren't all that bothered by the situation.

Another thing about the heated local-group arguments is that few people have a good idea how unprivate their situation really is. The paranoia about Bill Gates "microchipping" people is a cartoonish example, but there's a vast group of people very concerned with privacy who have close to no understanding of what it actually involves (or how little of it they have).

If anything, the noxious effect of mass collection is most evident in the micro-marketing of a variety of crazed ideas to those most susceptible to them - and in employers and landlords being able to harass their own employees over particular things they object to (while letting a lot of other things through; business owners have less to worry about).

[+] ptg473|5 years ago|reply
this is the kind of privacy discourse I am interested in. Whether an individual can find my ssn, location, credit cards, or whatever personal information is not really what I am thinking about when I think about “protecting my privacy” but rather reducing my data emissions that compose these ratings. in my experience it’s hard to get this across to people who are not familiar though, always get the “I have nothing to hide :) what are you trying to hide?” response. Will try this “social cooling” framework next time. maybe a little less daunting as an entry point than “surveillance capitalism”
[+] OpticalWindows|5 years ago|reply
> When you think of privacy in terms of 'social cooling', or consider things like China's 'social credit' system, I can't help but think we are much closer to the world depicted in the last season of Westworld than we might want to admit.

We were 'almost' there 20 years ago. We are firmly near Westworld now (everything short of the androids).

[+] woeirua|5 years ago|reply
If there's anything that gives me hope that we can avoid a dystopian future driven by social media, it's that Deep-learning / AI is being used to cheaply create realistic forgeries of just about everything: profile pictures, text, profiles, voice recordings, etc.

Within the next 10 years, and maybe much sooner, the vast majority of content on FB/Twitter/Reddit/LinkedIn will be completely fake. The "people" on those networks will be fake as well. Sure there are bots today, but they're not nearly as good as what I'm talking about, and they don't exist at the same scale. Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.

IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.

[+] floatrock|5 years ago|reply
My family grew up behind the iron curtain. At a family event once I heard someone tell a story that I think has been the most accurate prediction of the last few years (if anyone knows the actual interview event, please tell me more so I can get the exact wording, this is all paraphrasing from childhood memories).

A western reporter travelled to the other side of the iron curtain once and was doing what he thought would be an easy west-is-great gotcha-style interview. He asked someone over there, "How do you even know what's going on in your country if your media is so tightly controlled?" Think Chernobyl-levels of tight-lipped ministry-of-information-approved newspapers.

The easterner replied, "Oh, we're better informed than you guys. You see, the difference is we know what we're reading is all propaganda, so we try to piece together the truth from all the sources and from what isn't said. You in the west don't realize you're reading propaganda."

I've been thinking about this more and more the last few years seeing how media bubbles have polarized, fragmented, and destabilized everyone and everything. God help us when cheap ubiquitous deepfakes industrialize the dissemination of perfectly-tailored engineered narratives.

[+] helen___keller|5 years ago|reply
> IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online

On the flip side, successful startups that aren't full social networks but do require some authenticity verification have already been proven: Nextdoor and Blind, for example.

I think the biggest issue is that scaling to a Facebook-style, Reddit-style, or Twitter-style "full-world" social network implies colliding people who have no other relationship or interaction but are linked through a topic or shared interest.

And, in my opinion, when you hit a certain level of scale, verification almost becomes pointless: there are enough loud, angry, trollish people out there that I don't think it matters whether they're verified or not. You can't moderate away toxicity in discussions that include literally a million participants.

I think you need both verification and some way to keep all the users' subnetworks small enough that it isn't toxic or chilling. But then you lose that addictive feed of endless content that links people to reddit or Facebook or Instagram. Tough problem

[+] vasco|5 years ago|reply
You mention realistic forgeries, AI and huge volume as a possibility and that the outcome would be that people would be pushed into the real world but I'm not sure I see the connection.

If I can interact with bots that emulate humans with such a degree of realism, what do I care? You could be a bot, the whole of HN can be bots, I don't really care who wrote the text if I can get something from it, I mean I don't have any idea who you are and don't even read usernames when reading posts here on HN.

At its core this seems like a moderation issue: if someone writes bots that just post low-quality nonsense, ban them. But if bots are merely wrong or not super eloquent, I can point you to Reddit and Twitter right now and you can see a lot of that low-quality nonsense, all posted by actual humans. In fact you can go outside and speak to real people, and most of it is nonsense (me included).

[+] Sargos|5 years ago|reply
> IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.

Any kind of widely used identity/authentication system would need to be a protocol and not a product of a for-profit corporation. Businesses take on great risks if they use another corporation's products as part of their core operations as that product owner can change the terms of service at any time and pull the rug out from under them. A protocol is necessarily neutral so everyone can use it without risk in the same way they use HTTP.

For identity protocols I think BrightID (https://www.brightid.org/) is becoming more established and works pretty well.

[+] jeremyjh|5 years ago|reply
See also Neal Stephenson's Fall: Dodge in Hell. What happens there though isn't authentic experiences but instead people buy tailored human/AI agent filters called editors to construct a reality for them by filtering out most media sources, including billboards and other interactive real-world advertisements and media screens. This way each individual has their own media reality.
[+] 542354234235|5 years ago|reply
> Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.

Will they? People interact with these things because they are giving the brain what it wants, not what it might need. How many people would flock to a verified minimal bias news site? How many people would embrace so many hard truths and throw off their comforting lies? How many people could even admit to themselves they were being lied to and had formed their identity around those lies?

Do people want authentic now? The evidence says no.

[+] chmod775|5 years ago|reply
That's just digital certificate-based government ID. You could maybe provide some layer of abstraction above it to improve the developer experience, but at the end of the day you're reliant on it existing. Everything else will be too easily forged (unless you're planning on doing in-person validation).
[+] jberryman|5 years ago|reply
But bots and spam and Russian memes are already deeply engaging to people. I'm sure it will only get worse, though obviously some people will opt out.
[+] paulvorobyev|5 years ago|reply
> IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.

The US government does authentication in real life via social security numbers. Of course, they are not very secure: a government-operated SSO or auth API for third-party applications would be a logical next step.

It would guarantee uniqueness and authenticity of users. Even better, if this were an inter-governmental program, it would deter government meddling: a state issuing too many tokens for fake accounts would arouse suspicion.
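A government-operated auth API like the one described above boils down to two operations: the issuer signs an identity token, and third-party verifiers check the signature before trusting an account. Here's a minimal, hypothetical sketch of that flow. All names and fields are illustrative assumptions, not any real standard, and it uses a symmetric HMAC key (as if we were the issuer's own verification service) purely to stay self-contained; a real deployment would use asymmetric signatures so third parties never hold the signing key.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical signing key held only by the issuing government service.
GOV_SIGNING_KEY = b"demo-key-held-only-by-the-issuer"

def issue_token(citizen_id: str) -> str:
    """Issuer side: bind a pseudonymous subject ID to a signature."""
    payload = base64.urlsafe_b64encode(json.dumps({"sub": citizen_id}).encode())
    sig = hmac.new(GOV_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token: str) -> bool:
    """Verification API: accept only tokens the issuer actually signed."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(GOV_SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest(sig, expected)

token = issue_token("alice")
print(verify_token(token))        # True
print(verify_token(token + "x"))  # False (tampered signature)
```

Note that uniqueness (one token per person) is the genuinely hard part the comment gestures at; signatures only prove a token came from the issuer, not that the issuer issued exactly one per citizen.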

[+] deeeeplearning|5 years ago|reply
> Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.

I think you have completely misread the situation. The "fakification" of social media is already happening. Much if not most engagement is already driven by bots or by fabricated "influencers" and more people are using these platforms more often, not less.

[+] ekianjo|5 years ago|reply
> Once that happens, the value of those networks will rapidly deteriorate as people will seek out more authentic experiences with real people.

Not so sure. I'd rather wager that people won't really care whether they interact with real humans or not. Why would it matter? It's not rare for people to relate to and feel emotions for virtual characters in video games - even though they are perfectly aware it's all fake! The same can be said for movies and TV shows. You know it's fake, yet you watch and enjoy. I'm not sure why it would be ANY different for social networks, which are basically just another form of entertainment.

[+] 12xo|5 years ago|reply
This is very interesting. So basically, we'll all use fake personas managed by AI. And nothing online will be real...
[+] malandrew|5 years ago|reply
> IMO, there's a multibillion dollar company waiting to be founded to provide authenticity verification services for humans online.

Ironically accounts with Twitter's blue check mark are often the accounts most likely to be managed by a social media manager.

[+] mola|5 years ago|reply
Really? People censoring themselves is the problem? Whenever I take a peek at social feeds I see people saying crazy things, insults, conspiracy theories, hate, etc. Usually I end up with the feeling that the larger the audience and the concurrency of engagement, the less people censor themselves; it often even makes them say things they normally wouldn't.
[+] imdsm|5 years ago|reply
Perhaps people censoring themselves is the reason you see crazy things, insults, conspiracy theories, hate, etc. The rational and well-mannered people aren't taking the risk so all you hear is those who will take the risk.

It's why politics is full of goons. Who in their right mind would go into that arena, to do good, when the risks are so high, the exposure so great, the hatred so guaranteed? Just the wrong people willing to take the risk.

[+] hudon|5 years ago|reply
Frequent in-person discussions between people with different opinions tend to make people compromise and find nuance more easily. However, if one side of the discussion is self-censoring, then both sides will tend to develop extreme opinions without any means to temper them. As such, what you are describing is actually evidence to support the self-censorship hypothesis, not refute it.
[+] josefresco|5 years ago|reply
I'd be interested in figuring out how I can use this to my advantage. For example, create a persona online that is optimal to lenders, employers and even the government.

The issue is my "real self" is uninterested in participating in these networks, even if to create a fake persona.

Maybe it could be automated, or outsourced?

[+] socialcooling|5 years ago|reply
Creator of socialcooling.com here. You may enjoy this other website I created:

https://www.cloakingcompany.com

It's a fictitious company that helps you do exactly this. And while it's fiction, the tool actually does work.

[+] renewiltord|5 years ago|reply
It has no bearing on anything as far as I can tell. For decades I've been open about my drug use, lack of care for people less fortunate than me, anti-organ-donation, anti-first-lady, illegal importation of pharma, and a hundred other things.

I have no problem accessing a $1.5 million mortgage at 2.875%, getting prescribed drugs, or immigration beyond whatever is inherently hard about the system.

The best way is still the real information. The hard stuff in the real world. What you do online does nothing.

Except maybe the Tinder thing. Most dating apps align your attractiveness with the attractiveness of potential targets. That's to be expected.

The way I see it is "Information wants to be free".

[+] postsantum|5 years ago|reply
Was thinking the same. I wonder if there is a market selling "ready to move in" identities
[+] gorgoiler|5 years ago|reply
This is wire fraud, comrade.

All citizens who lie about being cat owning church going knitting enthusiasts — regardless as to whether it was to get a better rate on their next car lease, or not — will be incarcerated.

This may be reduced to a small fine (and denouncement) if you forgo your right to the wasteful scrutiny of a public trial.

Glory to Arstotzka

[+] tootie|5 years ago|reply
I don't think that would really fly. You may get served a higher class of ads, but if you go apply for a loan or a job, you still have to disclose your real self.
[+] julienb_sea|5 years ago|reply
This whole concept seems overdramatic to me at least at present. Banks are making lending decisions based on steady income and payment history, not your online persona. Similarly for employment. If you have reasonable qualifications, you will have no trouble finding work, regardless of how "optimal" your persona is.

Advertising is the area in which the most persona research and targeting is implemented. I suspect the reason no one is trying to fake online personas is because it would only have noticeable impact on what ads you see.

[+] jfarmer|5 years ago|reply
Hah. Reminds me of Gattaca.
[+] thegrimmest|5 years ago|reply
Is it wrong to suggest that this (if accurate) is a positive trend? I would like to live in a society where people spend more time considering what they say publicly, keeping to themselves, and refraining from imposing their thoughts and opinions. Live and let live.

If you want to have a private conversation, social media doesn't seem to be a good vehicle for it. Much like airing your dirty laundry in the town square has long been considered bad etiquette, airing personal grievances on the internet seems to be in poor taste.

It must be noted that manners never arise spontaneously in a culture, but because people fear the consequences of breaching etiquette. I for one welcome the return of politeness to society.

[+] keiferski|5 years ago|reply
This is a good site, but it leaves out the fact that the traditional mass media itself has enforced certain opinions, which subsequently leads to a chilling effect.

Culturally, we need to get to a place where words aren't considered a form of violence, and where mere discussion of controversial ideas isn't shot down for "giving the enemy a platform." The concept of a calm debate really needs to make a comeback.

"It is the mark of an educated mind to entertain a thought without accepting it."

- Aristotle (paraphrased)

[+] shadowgovt|5 years ago|reply
Gotta be honest: I don't have to spend more than 5 minutes on Facebook to disabuse myself of the hypothesis that, on average, people are feeling constrained about what they're saying.
[+] drdeadringer|5 years ago|reply
> If you feel you are being watched, you change your behavior.

I feel like this has been known for a long time. For example: If you walk into a Kindergarten class and watch the children play, once they notice you watching them they change behavior away from "natural play" to "observed play". I believe Cory Doctorow made this observation a spell ago.

Edit: I'd like to add that one of my parents was a teacher in a school with two-way mirrors for observation. People could secretly observe a given class in session, watching the teacher and/or the students live but without the "observer effect". The entire school building was designed for this purpose, and while everyone knew it, it appeared to work as intended. "Out of sight, out of mind" is real. Yes, this particular parent was on both sides of the glass.

[+] tboyd47|5 years ago|reply
This is exactly why I had to get off of Facebook (again).

I deactivated my first account 8 years ago, but got back on to re-connect with my old pals and acquaintances from back in the day. For that reason, it was fantastic.

After another year, I realized that I can't actually say ANYTHING interesting on this platform without offending someone. There's a lot of variety in my crowd. I have the sense IRL to know that not everything is for everybody, but that doesn't matter much on Facebook unless you want to spend hours and hours hand-crafting subsets of your friends for different topics (I don't). And I have zero interest in posting selfies or status updates of what's going on in my life, so that made the platform exceedingly boring and a waste of time for me. It's a shame, because it does work really well for "connecting" with people (in the shallowest sense of the word).

[+] WillDaSilva|5 years ago|reply
The point about minority views no longer being able to take over is a scary one. There has been a great amount of social progress in the past several decades, and that sort of progress wouldn't be possible under the effects of strong social cooling.
[+] captainbland|5 years ago|reply
Dare I say it, this same thing likely happens on this very website. People seek jobs directly off hacker news, so those people are likely to avoid saying anything that might alienate a potential employer.
[+] auggierose|5 years ago|reply
Hey dang, I've seen you make these "multiple pages" comments a few times now. Maybe it is just time for a UI that fixes that?
[+] alex_young|5 years ago|reply
This is why Real Names is such an evil idea.

Yes, I’m using a strong word. Evil actually means something in this context though.

Real Names is a way to lock your social behavior to your persona, and then to sell that data in real time to the highest bidder.

Forums such as this one allow me to use my real name if I want to, but because they don’t require this, they have no way of algorithmically associating Alex Young the person with alex_young the account.

[+] notacoward|5 years ago|reply
Ironically, one of the things that's worst about being online is often the lack of social control. By now, just about everyone has had one of their previously normal-seeming friends or relatives go on an insane political rant on Facebook, or had a Twitter troll show up in their replies, or read just about any comment on YouTube. People act in these horrible ways because they can, because real or effective anonymity lets them do so without disapproving looks from people whose approval matters to them.

The solution to privacy issues is not to make everyone anonymous. (Nobody ever actually puts it that way, but a lot of people suggest solutions that basically amount to the same thing.) Under-identification is as much of a problem as over-identification. Reputation and social pressure also prevent a lot of bad behavior. For that to happen, we still need people's identity to have some continuity ... and that's where pseudonyms come in. Go look at the examples in the OP. Practically all of them involve some kind of "leakage" from one part of a person's life to another. This is the same problem that has existed since before computers, with people having safe persistent identities within one community until they're "outed" to the broader one. If people had more control over the different parts of their identity, to connect them or not as they see fit, these things couldn't happen. Better technical and social support for pseudonyms might not be a panacea, but it would certainly go a long way.

[+] azanar|5 years ago|reply
I'm going to ask a question that I fear will have me labelled as naively privileged almost beyond any hope of my eventual redemption.

Are we as individuals hopelessly trapped in a social fabric that leads to the kinds of bad outcomes based on abuse of data that the author describes?

Assuming we can escape, is our only way out of this fabric to shred it from within? What of the benefits that we shred in our zeal? Is it mistaken to even claim there are benefits to be weighed against the drawbacks, because the drawbacks are so bad?

Perhaps it is a naive question. Is there a way we can reduce the bad outcomes by making those that cause them irrelevant, rather than counter-engaging them directly?

[+] ry454|5 years ago|reply
Comments on HN are an example of how this cooling effect works. It takes only a few upset readers to take your comment down, so if what you say deviates even slightly, by 0.01 sigma, from the boring mainstream viewpoint, you'll upset at least a few readers.

Same idea, but from another angle: it's well known that you can say a lot in a small group but very little in a large group, because it's a lot more likely that someone in a 1,000-person conference will be offended by your words. With the internet and social networks, you have to assume that you're always talking to the entire Western world, and there's a nearly 100% chance that some angry activists will be offended, so you always have to calibrate your talking points to the most boring mainstream viewpoint.

[+] motohagiography|5 years ago|reply
If the Varian Rule is true, that what the rich have today, the middle classes will have in 5-10 years and the poor in 10-15, it's worth noting that what the rich have today is private security.

The real risk is that the ultimate popular reaction to these systems will not be civil.

[+] tony_cannistra|5 years ago|reply
I like the climate change comparison.

One of the opportunities for comparison that this site only barely touches on is the fact that, like climate change, the companies responsible for this global phenomenon both know it's happening and are likely actively working to avoid talking about it. This happened with Exxon, BP, ConocoPhillips, you name it; it's now happening with Facebook, Google, etc.

This undoubtedly happens because any change for the public good would undermine these powerful corporations' bottom lines.

What can we learn from our failure to hold fossil fuel corporations accountable that can be translated here?