As long as the user has complete control over the agent, I am all in favor of screening inbound messages and dropping anything I don't like (hate speech, but also paid speech). This is consistent with my view on "consensual communication" - it is startling that we allow ourselves to force information into each other's minds, but we don't allow ourselves to shove food into each other's mouths without permission!
And yes, if this function came under central control, you might have yourself a very merry dystopia. You might have a dystopia anyway if people can set their filter not just on hate speech and paid speech, but on any predicate they define, including anything that disagrees with them. It's the ultimate information silo, and would no doubt be deadly to any democracy. It might even be deadly to any form of government, since it would tend to seriously weaken the intellect of the population.
> it is startling that we allow ourselves to force information into each other's minds, but we don't allow ourselves to shove food into each other's mouths without permission!
I'm sorry, this whole concept irks me in a "not even wrong" way. It's difficult to even begin to explain how wrong I feel this sentiment is. Let me try.
First, it makes no sense to draw an equivalence between taking in information and physically assaulting someone with food. Eating is an active thing. It's something you do. You can be in the same room as a piece of cake and not eat it. You can't choose not to sense something. There is not even an option to consent to communication with someone. If you're near someone and they talk, they've communicated with you, whether you wanted them to or not. (Assuming you speak the same language).
It's the natural state of a person (all living organisms, actually) to be passively receiving as much information as possible all the time. This is because survival is dependent on being as aware as possible of your surroundings so you can detect both threats and opportunities.
Deliberately choosing to limit the type of information you're able to perceive makes as little sense to me as deliberately trying to reduce the number of colors you can see. Or making it so you can feel only pleasure but not pain. There are people who can't feel pain, it's a debilitating disorder that makes it very difficult to navigate the world.
We need to be exposed to uncomfortable ideas just as much as we need to be able to feel pain. Removing any information you "don't like" from your awareness would be about as healthy for your mind as eating only cake and potato chips would be for your body.
And who gets to decide what's 'hate speech'? Censorship is basically the same as propaganda - changing the narrative to influence opinions. One day obvious hate speech is banned, the next day any dissenting opinions are 'hate speech'.
We always must ask this question: WHO ARE THE DECIDERS?
I refuse to believe the challenges we face now are unique to us. We made the decision as a country not to trade comfort for liberty and we must not go back on that.
The tools that are needed here would provide the ability for each of us to screen out whatever we don't care to see, whether it be racist speech or just people that won't shut up about crossfit.
And since it's just me controlling my filter, I could easily adjust the level of filtering. Sometimes it's useful to know what people outside my Overton Window are talking about.
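A minimal sketch of what such a user-owned filter might look like, with an adjustable strictness knob (the class, category names, and scoring scheme here are all hypothetical, purely to illustrate the idea):

```python
from dataclasses import dataclass, field

@dataclass
class PersonalFilter:
    # Topics the user has chosen to screen out - entirely under their control.
    muted_topics: set = field(default_factory=set)
    strictness: float = 1.0  # 1.0 hides all muted topics; 0.0 shows everything

    def allows(self, post_topics: set, score: float = 1.0) -> bool:
        """Hide a post only if it matches a muted topic AND the user's current
        strictness is at or above the classifier's confidence for that match."""
        if not (post_topics & self.muted_topics):
            return True
        return score > self.strictness  # dial strictness down to peek outside the bubble

f = PersonalFilter(muted_topics={"crossfit"})
print(f.allows({"cooking"}))    # True: not a muted topic
print(f.allows({"crossfit"}))   # False at full strictness
f.strictness = 0.0
print(f.allows({"crossfit"}))   # True once the user relaxes the filter
```

The point of the sketch is that the dial belongs to the user, not a central authority - turning `strictness` down is the "what are people outside my Overton Window saying" mode.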
Typically the democratically elected legislature defines hate speech, and the penalties if any, through legislation. The executive branch will generally seek enforcement of the legislature's laws through the courts. Then the last check/balance on power comes by way of the judiciary, which may or may not include finders of fact: a jury made up of the very members of the electorate, coming full circle.
>Censorship is basically the same as propaganda - changing the narrative to influence opinions.
By that measure almost all communication is propaganda. For example, was your comment intended to control the narrative to influence opinions (i.e. "censorship is the same as propaganda")? And are you suggesting the censors be censored?
Part of the problem is that social media is both personal interaction and a form of publication.
If you view social media as personal interaction then ask yourself how you would react to statements in real life. You would shut someone down if they threatened you or people you care about. I have seen people go so far as to threaten people's lives, but more veiled threats are also in the social space.
Call it censorship if you will. The definition of censorship I will leave to pundits. The pundits have decided to define social media as publication only.
It's a terrible idea, based on the belief that "AI" can solve anything - "all we need is better AI," as Zuck tried to convince Congress. It's not a technical problem[1], and any technical solutions are bound to be riddled with trust issues.
[1] IMO it's a sociological problem. We should be connecting IRL rather than online, which just leads to isolated people ending up alienated and indoctrinated with hate. Instead of only "hunting down" hate speech, we need to address the fact that people join such groups in the first place. There are some really hard questions when it comes to dealing with white-power groups: https://twitter.com/EmilyGorcenski/status/120998572902520422... and unless we can find a solution to them, people will continue getting clicks for snake-oil solutions such as "better AI" (AI really just stands as an excuse to continue the current revenue model of information extraction and surveillance capitalism).
"Hate speech" seems to be imbued with somewhat religious meaning, since most conversations about it make huge assumptions about its impact, frequency, and scope.
I noticed several years ago that you can mentally replace terms like "problematic" with "sinful" and "hate speech" with "blasphemy" without losing any meaning in most instances. They basically convey nothing more than a subjective moral judgment informed by group affiliation and puritan ideology.
"Racist/sexist/homophobic" still convey a little bit of information since they hint at a way in which something might be genuinely bad. But their definitions have been so irresponsibly broadened that you have to take it on a case-by-case basis whether the label even means anything let alone whether it's accurate.
This is a bad problem and everyone should worry about it, because a meaning vacuum is like a power vacuum. Even people who don't care about cis white men getting verbally abused need to realize that when you publish insane histrionics about OK signs and cartoon frogs, you generate confusion and doubt that actual white supremacists can use for cover. Extremism begets extremism.
Communication should be consensual. If I want to pre-emptively block some kind of content, or delegate such authority to some trusted third party, that's great. It's what we do with ad blockers all the time, which are less controversial here for some reason than self-defined "safe spaces."
If anything, the ability to block content as opposed to individual users would lead to more exposure to new viewpoints. For instance, I think Rod Dreher has lots of interesting things to say. But it seems like a quarter of his posts are rants about trans people using bathrooms, which I just don't have time for. And as a result I mostly skip everything he writes, including the good stuff. But if we built tools that emphasized filtering on content instead of people, it would simultaneously let us get presented with arguments we're actually interested in having and also not prejudge content by association with other content that shares the same author.
We accept ad blockers more readily because it's commonly accepted that ads are a different sort of speech, in an ethical way. It's not somebody telling you something; it's a machine trying to sell you something.
This is already done on some sites through shadow-banning.
On big social media sites, people will likely work around the restrictions. I can think of hundreds of ways to bypass such bans: GIFs, different languages and character sets, slang, linking to blogs with rotating domain names, rot13, base64, a Morse code browser plugin, to name a few. Those that don't bother may find themselves in echo chambers.
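To illustrate how thin plain keyword bans are - a hypothetical blocklist filter in Python, trivially evaded by two of the encodings mentioned above (the blocked term is made up for the example):

```python
import base64
import codecs

BLOCKLIST = {"badword"}  # hypothetical banned term

def naive_filter(text: str) -> bool:
    """Return True if the text would be blocked by a plain keyword match."""
    return any(term in text.lower() for term in BLOCKLIST)

msg = "badword"
print(naive_filter(msg))                                       # True: caught
print(naive_filter(codecs.encode(msg, "rot13")))               # False: rot13 slips through
print(naive_filter(base64.b64encode(msg.encode()).decode()))   # False: so does base64
```

Anyone motivated enough to read (or spread) the banned content can agree on an encoding out of band, which is why keyword filtering mostly catches the people who weren't trying.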
Do people still create new slang or code speak these days?
One of my favorites, but probably not useful on facebook:
I'm glad they are suggesting something short of censorship.
I've always supported the idea of letting people choose what they want to see. If moderators could tag posts with tags like 'spam', 'nsfw', 'graphicviolence', 'racistagainstraceX', etc... and if people could then choose what kinds of posts they want to see... and if anyone could be a moderator... and if people could then subscribe to only the moderators they trust... problem solved.
I just can't figure out how to make such a system scale. I've tried a few architectures but I'm still looking for the breakthrough to make it scale.
The problem today is if you don't like Twitter moderators you go to Mastodon or to Gab, and the community fractures into bubbles, and that drives people further apart. I think fixing moderation is going to be the next big thing.
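A toy version of the tag-and-subscribe scheme described above (moderator names, post IDs, and tags are all hypothetical; a real system would need to handle scale, trust revocation, and tag disputes):

```python
# Each moderator independently tags post IDs; users subscribe to the
# moderators they trust and declare which tags they never want to see.
moderator_tags = {
    "mod_alice": {101: {"spam"}, 102: {"nsfw"}},
    "mod_bob":   {102: {"nsfw"}, 103: {"graphicviolence"}},
}

def visible(post_id: int, trusted_mods: set, hidden_tags: set) -> bool:
    """A post is hidden if any trusted moderator gave it a tag the user hides."""
    for mod in trusted_mods:
        tags = moderator_tags.get(mod, {}).get(post_id, set())
        if tags & hidden_tags:
            return False
    return True

# A user who trusts only mod_alice and hides spam:
print(visible(101, {"mod_alice"}, {"spam"}))   # False: alice tagged it spam
print(visible(103, {"mod_alice"}, {"spam"}))   # True: alice never tagged 103
```

Note that untrusted moderators have no effect at all, which is the appeal: filtering power comes from whom you choose to trust, not from a single site-wide policy.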
A lot of the comments here seem to be missing the point of the article. Hate speech wouldn't be removed, it would just require an additional click or two to take a look at it. I like this because it gives the user a choice.
Let's say you are a black man who just wants to scroll through Facebook at the end of the day to see what your friends are up to, without having to see racist comments that friends of friends have left. You would see the warnings that hate speech might exist behind the filter, and, since you are trying to relax at the end of the day, you could make the conscious decision that looking at that stuff wouldn't be good for you.
But then, some other time, like maybe early the next day, you could make the conscious decision to click through to see what the person said, and tell them off, or, if the content was misflagged, let the algorithm know.
The current state is that we remove the content completely and pretend like people aren't racist/sexist/phobic, or we leave the content up and allow people to get dragged into flamewars at all hours of the day. This new proposed tech would be akin to HN's option to "showdead," except with more context about what you're opting into potentially seeing.
>Definitions of hate speech vary depending on nation, law and platform
According to Russian law and its application, speech critical of the government falls under "extremist speech spreading/inciting hate toward a specific social group" (in this case, the social group is the government).
So, contained like a computer virus. Sounds good; I suppose whatever these "computer virus" things are, they're not a big problem for people or anything.
Anyway, one obvious way this is different: when the popup says "this might be a dangerous file, continue?" users click yes without thinking about it, because they want to see the funny thing that Bob sent them. When the popup says "this might be homophobic hate speech," they will click no if they don't want homophobic hate speech and yes if they do - because some people do want it, which is part of the problem.
The problem is only partially that gay people have to read homophobic hate speech, the other insoluble part of the problem is that there are non gay people who are interested in reading it, and reading more of it, and more virulent hate speech, and then going out and beating up a gay person.
Anyway, researchers say that two totally unlike things can be handled in the same way, because they're here to help: https://xkcd.com/1831/
ars | 6 years ago:
In hindsight, people killing PICS as a "censorship tool" was so unfortunate. The whole point of PICS is that YOU pick your rating agency.
OneGuy123 | 6 years ago:
They will understand it only when it's too late.
misterspaceman | 6 years ago:
I think "spam" is a better analogy than a computer virus.
rudolfwinestock | 6 years ago:
Therefore, we can expect all kinds of bad actors to take advantage of that.
Sooner or later, Internet outrage mobs will form around vulnerable people whose speech wasn't sufficiently not-"hate speech."
I'm sure that plenty of people around here can dream up some more nightmare scenarios.