The paper says there were 155,000 users in each treatment of the experiment. Let's presume random selection, and that the occurrence rate of mental disorders is the same for Facebook users as for the general public (probably not too far off). Then Facebook intentionally downgraded the emotional state of:
10,000 sufferers of Major Depressive Disorder
5,000 people with PTSD
4,000 bipolar disorder sufferers
1,700 schizophrenics
and a plethora of others with various mental disorders.
11/100,000 people commit suicide each year in America. How many were part of that treatment of the experiment, without consent or knowledge?
As a scientist, I'm fascinated by the research. As a human being, I'm horrified it was ever done.
http://www.nimh.nih.gov/health/publications/the-numbers-coun...
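(A rough back-of-envelope sketch of that arithmetic, for anyone who wants to check it. The prevalence figures are approximate 12-month rates of the kind reported on the NIMH page above, and the whole thing assumes the treatment group mirrors the general US adult population.)

    # Back-of-envelope check of the numbers above, assuming the 155,000-person
    # treatment group mirrors the general US adult population. The prevalence
    # figures are approximate 12-month rates of the kind NIMH reports.
    group_size = 155_000

    prevalence = {
        "major depressive disorder": 0.067,  # ~6.7%
        "PTSD": 0.035,                       # ~3.5%
        "bipolar disorder": 0.026,           # ~2.6%
        "schizophrenia": 0.011,              # ~1.1%
    }

    for condition, rate in prevalence.items():
        print(f"{condition}: ~{round(group_size * rate):,} expected in the group")

    # ~11 suicides per 100,000 people per year implies, for a group this size:
    print(f"expected suicides per year: ~{group_size * 11 / 100_000:.0f}")  # ~17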
I'm even more concerned about the friends of those participating in the experiment, particularly the friends of people in the group that had more negative messages filtered out.
If a friend made a post deemed negative, one that would otherwise have signaled to close friends to check up on that person and intervene if necessary, there's a very good chance it would have been filtered out in a system like this.
The potentially affected population then becomes much greater than the 155,000 being experimented upon, and a much greater number of people would have been at risk of having no close friends or others available to intervene, people who would otherwise have been able to do so if the Facebook algorithm hadn't been altered to follow this bullshit happiness metric for research purposes.
I really hope this becomes much bigger news, and some action is taken to ensure something like this doesn't happen again; but considering the lack of funding and attention given to mental health, especially in the work-till-you-drop business world, I sadly doubt it.
How would this be determined in offline experiments where people volunteered?
i have bipolar disorder, and i barely made it through an intense schizoaffective episode where i heard many voices, felt like my consciousness was splitting into multiple parts, and was terrified. this was just two months after my startup exited and i got nothing in 2012.
oh yeah - i'm also a startup founder, worked at uber, google and microsoft - now facebook. i don't speak for them. just me.
i'd like the world to understand me better. so much of what i've struggled with is mood-based.
you know that guy - http://www.losethos.com/ - i know _exactly_ what he means. i understand him when i hear him speak, and i feel bad for the guy. it's scary being where he is, "knowing" how powerful and right you must be, and knowing how people laugh at you behind your back - but you know you're right, that you're a conduit for god.
do you think it makes sense for him to be stuck like that? i sure as hell don't.
i'm sorry, i'm getting emotional here. this is hard for me.
i can't speak for my employer here. but i will tell you that your mindset of me as a victim who must not be upset - it can be more than a little offensive. fortunately, through all of my experience dealing with these issues, i've learned how to better manage my emotional states, and i've also learned to see emotion as a form of sensory input - like light and sound. i don't believe everything i hear or see - why should i believe everything i think or feel?
if facebook was making up random shit that was negative and showing it to their users - that sucks. if they were making falsely positive posts - forging your friends' activity - that also sucks.
but when they are selectively showing you portions of your friends' activity - something they were already doing anyway - it's wrong to say that they have "intentionally downgraded" my emotional state. if my friend says she's having a shitty day, she's not intentionally downgrading my state. she's having a bad day. if facebook hides that from me, are they making my day better? are they making her day worse? it's not really clear here. people know facebook adjusts their posts, and they did show that people who are exposed to negative content are less likely to post positive things. does this mean that the users are themselves feeling less positive? or are they just trying to match the tone of the social space they're in? i'm not sure.
our culture does not understand emotion - i think this is a serious problem and we really need to do something about it. that templeOS guy is not enjoying life or functioning nearly as well as he could if he were not shunned for being so wildly antisocial. you know who else gets shunned for being antisocial? people who say things like 'i am sad', 'i feel lonely,' etc etc, in public.
i'm sorry for the tone of this - it's - it's hard for me to stay calm here.
but let's look at this as if emotion were the "same kind of thing" as light or sound. spreading a negative emotional reaction to this article and saying you are "horrified" that someone who was depressed had more people see their depressed posts - you're contributing to the problem.
it feels to me that emotion has some 'conservation' like properties; you can't diminish a lot of negative emotion at once. it also seems to 'move' places; negative emotion between people who interact seems to get pushed to scary places where lots of fear and hate are concentrated. in late 2012 i felt like all of the evil in the world, all of the hate was being shoved into me because i told the world i could take it, i told the world i didn't want them to hurt like i did.
i heard voices telling me to kill, and i wanted to kill myself rather than hurt someone else. i thought i could take myself and the voices out with me - and then i heard the voices of my parents and loved ones calling to me.
when i see those shootings, where some loser with no friends and no hope goes out and kills a bunch of people - i feel like it happened to them, and rather than blame themselves for the horrible shit being pushed on them, they blamed the outside world.
but they're just as much victims - we're all victims - of our misunderstanding of emotion.
i'm sorry, i know you didn't ask for me to be upset at you, i know you mean well, i'm just.
i'm tired.
i want people to understand this stuff better because the better i've come to understand it, the better i've functioned in life. the ability to remain calm - an ability i have not exercised here because it impedes the ability to express genuine content - that ability is invaluable and a huge source of power.
energy moves from a hot reservoir - a bunch of pissed off, angry, furious people - to a cold reservoir - a room full of sociopaths who use fear and anger to extract energy from a warmer place. the heat-engine carnot cycle of samsara, and the transition from the golden age to successive yugas, is just adiabatic/isothermal expansion and contraction of emotional content.
shit this is making no sense.
so you see where i'm going - i'd like to understand this stuff better. i hope the study helps.
FB _already_ filters out updates based on some black-box algorithm. So they tweaked the parameters of that algorithm to filter out the "happier" updates and observed what happened. How is this unethical? The updates were posted by the users' friends! FB didn't manufacture the news items; they were always there.
I detest FB as much as the next guy, but this is ridiculous.
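To make the mechanism being described concrete, here is a minimal sketch of that kind of parameter tweak - not Facebook's actual code, which isn't public. It assumes a crude word-list sentiment check (the real study used LIWC word categories) and a probability of omitting posts of the targeted tone from a given feed load, which is roughly how the PNAS paper describes the manipulation.

    import random

    # Tiny stand-in word list; the study used LIWC's positive/negative categories.
    POSITIVE_WORDS = {"happy", "great", "love", "awesome"}

    def is_positive(post_text):
        return bool(set(post_text.lower().split()) & POSITIVE_WORDS)

    def filter_feed(posts, omit_positive_prob):
        """Drop each 'positive' post with some probability before showing the feed.

        The posts all exist and were written by real friends; the tweak only
        biases which subset of them gets shown, against one emotional tone.
        (The mirror-image condition would omit negative posts instead.)
        """
        shown = []
        for post in posts:
            if is_positive(post) and random.random() < omit_positive_prob:
                continue  # silently omit this post from this feed load
            shown.append(post)
        return shown

    feed = [
        "had an awesome day at the beach",
        "feeling pretty sad today",
        "love this new album",
        "my cat knocked over the plant again",
    ]
    print(filter_feed(feed, omit_positive_prob=0.5))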
In this case, I don't think there was actual risk, but just from reading the PNAS paper it doesn't sound like the study went through the proper process. If it was reviewed by an IRB, then it did go through the proper process and it's ethically sound - but still a PR nightmare.
As I understand it, the American scholarly tradition uses a 'bright line' definition of human experimentation that stresses getting ethical review board approval for pretty much everything involving humans other than the experimenter. Doing anything that needs review board approval without getting review board approval is seen as highly unethical, even if approval would clearly be granted were it requested.
For example, I once saw a British student e-mail out surveys about newspaper-buying habits; one was sent to an American academic, who replied to the student's supervisor saying the student should be thrown out of university for performing experiments on humans without ethical review board approval.
I strongly hope that they won't care about any of this "outrage" and continue to do more and more experiments. Maybe even open it up as a platform for scientists to conduct studies.
Facebook is in the unique position of possessing data that can be orders of magnitude more useful for social studies than surveys of randomly picked college students who happened to pass through your hallway. There's a lot of good to be made from it.
But the bigger issue I see here is why it's unethical to "manipulate user emotions" for research, when every salesman, every ad agency, every news portal and every politician does this to a much bigger extent and it's considered fair. It doesn't make much sense to me (OTOH I have this attitude, constantly backed by experience, that everything a salesman says is a malicious lie until proven otherwise).
It's an interesting question. I have the same adverse reaction to this story that a lot of people here have, but I admit I also thought, "If Facebook hadn't published this as research, but just had it as a business decision to drive more usage or positive associations with the website, no one would care."
My own way to reconcile this -- and I admit it's not a mainstream view -- is that advertisement and salesmanship should be considered just as unethical. I don't know how to quantify what "over the line" is, but it all feels like brain-hacking. Things like "The Century of the Self" suggest that in the past century or so we've become extremely good at finding the little tricks and failings of human cognition and taking advantage of vulnerabilities in our reasoning to inject the equivalent of malicious code. The problem is that when I say "we" I don't mean the average person, and there's an ever-growing asymmetry. Like malware developers adapting faster than anti-malware developers, most people have the same level of defense that they always have had, while the "attackers" have gotten better and better at breaking through defenses.
Sometimes I'll see discussions about "what will people centuries from now think was crazy about our era?" and there's a part of me that keeps coming back to the idea that the act of asymmetrically exploiting the faults of human thinking is considered normal and "just the way things are."
It's a matter of algorithmic scale. What would be the result of a social network (or anyone really) creating fictional users for the purpose of running social experiments on humans whose permission was not requested?
The speech of one salesman/politician is different from thousands of machines impersonating human speech. Speech is protected in some countries. Fake speech (e.g. experiments/spam) decreases the trust of humans in all speech on the network.
If anyone wants to run large-scale experiments, they can:
(a) ask for volunteers
(b) pay for labor
Those who want to volunteer or microtask can (a) opt-in for money or fame, (b) disclose that they have opted-in, so that others know that their conversations are part of an experiment.
Why is advertising (e.g. a promoted tweet) differentiated from non-advertising? Why is "disclosure" required in journalism? Why are experiments differentiated from non-experiments?
The act of observation changes the behavior of the observed. If experiments are not disclosed and clearly demarcated, users must defensively assume that they may be observed/experimented upon, which affects behavior in the entire network. As a side-effect, this pollutes any conclusions which may be drawn from future "social" research.
I would think that everyone expects salespeople, ads, news, and politicians to be acting in their own interest, and therefore takes precautions to counteract that.
Facebook, though, pretends to be a communication service. The general expectation from a communication service is that it transmits information between users, thus users expect that what they see or hear coming out of the communication service is what the person at the other end has said/that the person at the other end will see what they say. A service that doesn't provide that isn't really a communication service at all, by definition - and lacking any other uses, isn't really good for anything. Just imagine a phone company with advanced speech recognition and synthesis software in the line that rewrites your conversations to be happier (or any other quality that the company or its customers prefer).
Facebook’s Unethical Experiment: It intentionally manipulated users’ emotions without their knowledge.
I'm not defending Facebook or the experiment, but if you're going to call them out for "manipulating users' emotions without their knowledge", then you need to call out every advertising, marketing, and PR firm on the planet, along with every political talkshow, campaign, sales letter, and half-time speech...
> then you need to call out every advertising, marketing, and PR firm on the planet, along with every political talkshow, campaign, sales letter, and half-time speech...
Except that in every one of those examples, you know you're being fed a worldview slanted by the author for their purpose. They may try to appear unbiased, but you still know that they have a worldview that is in their best interest to share. That makes it completely different.
With that in mind, I'd find it important if our social media analytics publicly accounted for such emotional manipulations.
It could be wise to begin sharing such emotional-information influences, so that users and admins alike can be sensitive to them (and acknowledge that they're a real part of our systems).
To say this response is unnecessary and unfounded is disingenuous. Marc Andreessen (whom I respect) tweeted "Run a web site, measure anything, make any changes based on measurements? Congratulations, you're running a psychology experiment!" I could not disagree more.
This isn't simply a matter of looking at metrics and making changes to increase conversion rates. The problem is that users as a whole have come to expect Facebook to be a place where they can see any and all of their friends' updates. When I look at an ad, I know I am being manipulated. I know I'm being sold something. There is no such expectation of manipulative intent with Facebook, or that they're curating your social feed beyond "most recent" and "most popular", which seemingly have little to do with post content and are filters they let you toggle.
What FB has done is misrepresent people and the lives they've chosen to portray, having a hand in shaping their online image. I want to see the good and the bad that my friends post. I want to know that whatever my mom or brother or friend posts, I'll be able to see. Someone's having a bad day? I want to see it and support that person. That's what's so great about social media, that whatever I post can reach everyone in my circle, the way I posted it, unedited, unfiltered.
To me this is a disagreement between what people perceive FB to be and how FB views itself. What if Twitter started filtering out tweets that were negative or critical of others?
There are better ways to question the ethics of the experiment. Here's a simple approach:
Would anyone want their emotions manipulated to be unhappy or unhealthy?
The corollary in a medical experiment would be, would a healthy person want to undergo an experiment that could make them sick?
Some people mentioned advertising as a counterpoint, that what Facebook does is not at all different from advertising's psychological manipulation. Well, maybe some forms of advertising ought to be regulated too. Would a child voluntarily want their emotions manipulated by a Doritos ad to make them sicker or fatter?
Even if it's not known what the outcome is, the two points are:
(1) Facebook's various policies specify you will randomly participate in their studies, but
(2) It matters if an experimental outcome can harm you.
So even though you agreed to participate in experiments, you weren't told the experiments could hurt you. That is a classic medical ethics violation, and it ought to be a universal scientific ethics violation.
> place where they can see any and all of their friends' updates.
But that's the thing - Facebook has been manipulating your feed for years based on what it thought you would be interested in: favoring popular posts, posts from those you interact with frequently, posts that it thinks could be popular, etc. There's always been 'most recent', which is a more accurate timeline, and as far as I know Facebook never manipulated that.
While I neither agree nor disagree with whether the study was right or not, it wasn't with this study that they 'misrepresented people and their lives'. Facebook has been doing that for years!
NOBODY thinks that Facebook is a place to see all your friends' updates. And it has never been anything like that.
What Marc was getting at was that this is A/B testing. Everybody does A/B testing these days. Claiming it is "unethical" because the B group might be less happy than the A group is ridiculous. That's essentially the whole point of A/B testing. Try two things and see what makes your users happier, then you can do more of that.
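For comparison, a garden-variety A/B test looks something like the sketch below: users are deterministically bucketed into two groups, each group sees a different variant, and an outcome metric is compared afterward. The user ids and the "posts that week" metric are made up for illustration.

    import hashlib
    from statistics import mean

    def assign_variant(user_id):
        """Bucket a user into variant A or B by hashing their id."""
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    # Hypothetical logged outcomes, e.g. posts made by each user that week.
    observations = {"alice": 4, "bob": 7, "carol": 2, "dave": 9, "erin": 5, "frank": 3}

    groups = {"A": [], "B": []}
    for user, outcome in observations.items():
        groups[assign_variant(user)].append(outcome)

    for variant, outcomes in groups.items():
        if outcomes:
            print(variant, "mean outcome:", mean(outcomes))

    # In neither arm were users told they were in a test - which is exactly
    # the point under dispute in this thread.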
It's also worth noting that this is not (contrary to Andreessen's disingenuous tweet) a case of a website accidentally tripping over scientific-ethics norms through its normal course of operations, unaware that what they're doing might be considered a psychology study.
This was explicitly a psychology study, performed by professional psychologists, for the purpose of collecting data publishable in a journal! The lead author (and the Facebook employee on the project), A.D.I. Kramer, has a PhD in social psychology. I think it's perfectly reasonable in that setting to expect the researchers to be following the norms of scientific ethics.
The difference between Facebook and Twitter on your last point is that Facebook does it for you like a gentleman, and Twitter has you manually pick feeds which will never disagree with you.
Either way you're just going to listen to the least dissenting material you can find, might as well let them figure it out for you.
Maybe the difference is between moral good with questionable ethics, and moral mediocrity with unquestionable ethics.
When you publish a paper, you are supposed to state in the body of the manuscript whether it's been approved by an IRB and what the ruling was. I'm surprised a paper would be published without this, yet apparently this one was.
It's also appropriate to address ethical issues head-on in a paper about a study that may be controversial from an ethical perspective.
If it really was approved by an IRB, then the researchers are ethically in the clear but totally botched the PR.
If not, then I think the study was not ethical.
The difference between this experiment and advertising or A/B testing is _intent_. With A/B testing and advertising, the publisher is attempting to sway user behaviour toward purchasing or some other goal which is usually obvious to the user.
With this experiment, Facebook are modifying the news feeds of their users specifically to affect their emotions, and then measuring the impact of that emotional change. The intention is to modify the feelings of users on the system, some negatively, some positively.
Intentionally messing with human moods like this purely for experimentation is the reason why ethics committees exist at research organisations, and why informed consent is required from participants in experiments.
Informed consent in this case could have involved popping up a dialog to all users who were to be involved in the experiment, informing them that the presentation of information in Facebook would be changed in a way that might affect their emotions or mood. That is what you would expect of doctors and researchers when dealing with substances or activities that could adversely affect people's moods. We should expect no less from pervasive social networks like Facebook.
Every single time Facebook changes anything on their site it "manipulates users' emotions". Show more content from their friends? Show less? Show more from some friends? Show one type of content more, another less? Change the font? Enlarge/shrink thumbnail images? All these things affect users on all levels, including emotionally, and Facebook does such changes every day.
Talking about "informed consent" in the context of a "psychological experiment" here is bizarre. The "subjects" of the "experiment" here are users of Facebook. They decided to use Facebook, and Facebook tweaks the content it shows them every single day. They expect that. That is how Facebook and every other site on the web (that is sophisticated enough to do studies on user behavior) works.
If this is "immoral", then an website outage - which frustrates users hugely - should be outright evil. And shutting down a service would be an atrocity. Of course all of these are ludicrous.
The only reason we are talking about this is because it was published, so all of a sudden it's "psychological research", which is a context rife with ethical limitations. But make no mistake - Facebook and all other sophisticated websites do such "psychological research" ALL THE TIME. It's how they optimize their content to get people to spend more time on their sites, or spend more money, or whatever they want.
If anyone objects to this, they object to basically the entire modern web.
Exactly. I find this situation to be an example of ridiculous pattern matching. Is it published? Then it's a psychological experiment, and needs to be evaluated by an ethics board. Is it just A/B testing? Then it's not "science", so no need for ethics board.
I'm torn about this. In some ways, I can see how mental health issues could be detected, which could hopefully help us avoid horrifying events (mass shootings, off the top of my head). But then again, I can see how the Army or the government in general could control any type of popular uprising. FB, Twitter, etc. have given us tools to connect and join in efforts to fix what is wrong (I'm thinking of the Middle East, though that can be said about the Tea Party or even the Occupy movement). If the price is right, FB can hand over that power (i.e. to the NSA), or through these secret courts the Army/government can have direct control of FB. It's crazy to think that this only occurs in countries like Russia and China, but wake up America! This is happening here as well!
You know why I think they are doing this? Because there have been studies showing that people are miserable on facebook (see below) and I think people are starting to pick up on it. So FB feels some pressure to lighten the mood a bit. But as usual they do it with the subtlety of a drunken fool.
Also, the comparison to an A/B test is a false one. This is specifically to alter the moods of the user and test the results in a study, not to improve the user's experience or determine which app version works better.
Regarding the study mentioned above: http://www.newyorker.com/online/blogs/elements/2013/09/the-r...
> Facebook intentionally made thousands upon thousands of people sad.
Hang on. Wasn't the experiment to see whether users would post gloomier or happier messages respectively? This is very different from intentionally making people sad.
This study really makes me feel vindicated for unfollowing all of my friends along with every brand on facebook. I could've been part of the study but I'd never know, since the only way I see my friends' posts is to visit their pages directly where I can see them all unfiltered. I've been doing this for the past six months and it has dramatically improved the way I interact with the site. I can still get party invites and keep in touch with people, but I'm immune to the groupthink.
I have a feeling a lot of college courses on research methods are going to use this as an example of a grave ethics breach for years to come. With an experiment group as large as they used, statistically it's almost inevitable that someone in that group will commit suicide in the near future. If that person is in the group that was targeted for negative messages, even a rookie lawyer could make a sound case before a jury that Facebook's researchers have blood on their hands.
surely people have committed suicide after using facebook even without this study. is facebook guilty of that, too?
you may argue that facebook was "trying to make people depressed" but that simply isn't true. what if showing more of my friends negative status updates actually _helps_ them? depressed people are shunned in our society; facebook gave a voice to the voiceless. that's wonderful!
Possibly, but only in the short run, as skewed perception of reality tends to have long-term negative consequences. Which is precisely one of the reasons why this kind of stuff is evil.
The author falsely assumes that people change their sharing behavior due to changes in their mood.
More likely they just feel like "everyone's posting cats on Facebook, so that's a place for sharing cats, let me do so too", or something along those lines.
I wonder if Facebook plans on alerting subjects of this experiment to their participation?