
Justice Department to propose limiting internet firms’ Section 230 protections

132 points | _sfvd | 5 years ago | wsj.com

189 comments


throwawaygh|5 years ago

I'm generally sympathetic to the idea that Section 230 protections should come with some sort of obligation to allow free speech.

However, the actual policy proposals for replacing Section 230 are all outright dystopian. Josh Hawley, in particular, is NOT a free speech advocate. His problem with Facebook/Twitter is perceived liberal bias, and the alternatives to Section 230 that he suggests are 100% about wresting editorial oversight away from one class (tech CEOs) and handing it to another (a politically-appointed board).

Does anyone have a good proposal for how to go about reforming Section 230 in a way that's workable and values free speech?

jasode|5 years ago

>Section 230 protections should come with some sort of obligation to allow free speech. [...] Does anyone have a good proposal [...] and values free speech?

Nobody has a good proposal because every discussion about the idealism of "values free speech" is always hiding the true difficulty: nobody wants to be forced to pay for others' undesirable speech.

E.g. Youtube can't be a "free speech" platform because advertisers have free will and can choose to not pay for it. (Previous comment about Adpocalypse: https://news.ycombinator.com/item?id=23259087)

Always mentally translate "create a website that allows free speech" into "create a website that forces others to always pay for undesirable speech they don't agree with" -- and you will see that's a virtually impossible dream to accomplish. There is no broadcasting medium (including websites) in any country that doesn't have interference and pressure to remove/ban content via consumer boycotts, advertisers, subscribers, business judgement, or government officials.

Websites face the hard reality of needing CPU, disk, and bandwidth; all of those cost money, and that cost is the lever others use to keep "absolute free speech" from getting realistically implemented.

thoughtstheseus|5 years ago

Make it all dumb pipes and make users responsible for regulating what they see/hear. Make a market for filtering content; a single filter imposed across a whole platform is not flexible.

Require platforms over a certain size to provide real-time data accessibility across platforms. Facebook and Twitter are monopolies by virtue of market position; anyone can build a platform that is functionally the same. Create competition here.

danShumway|5 years ago

If you're looking for an alternative take, check out some of Cory Doctorow's writing on this. His position is that forcing platform neutrality is less important when platforms don't have a monopoly over communication.

Different people have come up with different plans about how you could address tech monopolies, with varying degrees of extremity:

- Splitting up companies that control entire vertical slices of a market. Warren in particular was campaigning pretty hard on this, especially in regards to Amazon/Apple app stores.

- Forcing companies to allow data exports by consumers, and specifically to allow automated data exports. For example, Facebook would need to allow you to access an API to pull your data, so you could plug that API into a competitor instead of manually downloading everything.

- Weakening Computer Fraud and Abuse laws around site scraping and adversarial interoperability.

- Adding additional exceptions to the DMCA around interoperability. For example, allowing companies to break Kindle DRM for the purpose of moving books to a competing service if Amazon didn't provide a way for them to migrate books on its own.

- Forcing certain data formats to be standardized, or requiring standardized API layers on top of services.
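The data-portability ideas in this list can be sketched concretely. Below is a minimal, hypothetical illustration (every field name is invented; no real platform's API or export format is used): a platform's native export records get normalized into a platform-neutral schema that a competing service could import.

```python
# Hypothetical sketch of the "standardized export format" idea from the
# list above. Every field name here is invented for illustration; no
# real platform exposes exactly this schema.

def to_portable(native_posts):
    """Normalize one platform's native export records into a
    platform-neutral schema a competing service could ingest."""
    return [
        {
            "author": post["user"]["name"],
            "created_at": post["timestamp"],
            "body": post["text"],
            "attachments": post.get("media", []),
        }
        for post in native_posts
    ]

# A record shaped like one platform's (imaginary) export dump:
native = [
    {
        "user": {"name": "alice"},
        "timestamp": "2020-06-17T12:00:00Z",
        "text": "hello world",
    }
]

portable = to_portable(native)
print(portable[0]["author"])  # prints: alice
```

The point of the sketch is only that once such a neutral schema exists, whether by mandate or by convention, "plugging your data into a competitor" becomes an automated import step rather than a manual download.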

There's a lot of debate in those areas about how far is too far, and what counts as a natural monopoly, and what negative side effects might exist for particular strategies. But, the thread running through all of them is that Section 230 is fine, awesome even. There's no need to get rid of it, 99% of the time we want moderation on most of our platforms.

Platform censorship is really only a problem when consumers don't have the ability to easily switch platforms/hosts, and in that case we should break the monopolies, not the Right to Filter[0]. You see people complain about censorship on Twitter, you don't see as many people complain about censorship on Mastodon, because on Mastodon you can set up your own server if you really need to. One of the biggest points of federated services is to allow communities to choose how aggressive they want to be about moderation.

[0]: https://anewdigitalmanifesto.com/#right-to-filter

bilbo0s|5 years ago

I don't have a good proposal really, but I agree that "politically appointed board" is exactly the worst thing we could have. That's the point where true free speech advocates will have suffered total defeat.

olliej|5 years ago

Define free speech in a way that allows a platform to ban offensive content while requiring it to publish all content.

Also this is contextually a clear retaliation for speech that the government does not like, and their arguments are pretextual.

But also, if a platform loses 230 protection when it restricts political opinions, then sites would need to leave racist and homophobic comments up, personal attacks against the authors, etc. Because if they lose 230 protection, they become directly liable for content on their site if they filter any of it.

That was the whole point of section 230 - sites have a legitimate reason to want to stop arbitrary content being hosted by them, but they only had the "I'm just a dumb pipe" defense as long as they left everything up. Preventing that is literally the reason section 230 exists.

But here we have a president who doesn't like one platform's content moderation policies, and has decided to rewrite the law in order to make that moderation illegal.

It is clearly retaliatory, and it is clearly with the intent of restricting the speech of those entities.

ch4s3|5 years ago

> some sort of obligation to allow free speech

Isn’t that the opposite of what the text of the law says? Doesn’t it provide for protection when moderating content that “one may find objectionable”, which could basically be anything?

pwdisswordfish2|5 years ago

To me, the question is whether the web is something people use through a middleman, i.e., someone else's website like Mark Zuckerberg's, versus a thing that we use directly, i.e., having our own websites. If we follow the latter thinking, then of course we are personally responsible for what content we place on the website.

In either case, the web in its design is still a "public place" where "free speech" can occur, where anyone who is connected to the internet has the potential (setting aside issues of state censorship) to communicate, via a public website, anything to anyone, anywhere in the world.

Section 230 was reputedly passed in response to a lawsuit against Prodigy, an online subscription service, which technically, IMO, was not the same as the emerging "web". IMO, services like Prodigy, Compuserve, America Online, etc. were walled gardens that could exist outside of the web. Rightly or wrongly, I always viewed Section 230 as protecting ISPs from litigation arising out of the content people included on their websites, not as protecting websites from litigation arising out of the publication of the content. It is up to the website owner to remove offending content, not the ISP to block access to it. This makes practical sense. We wanted ISPs to stay in business.

As crazy as it may seem to consider messing with Section 230, there is certainly an argument that the protection it affords has been usurped in ways never anticipated, by enormous "communal" websites larger than anyone could have imagined. When someone's website has billions of pages, comprising submissions from the general public, it becomes impractical to remove offending content. I doubt Section 230 was intended to address this problem, to keep a small number of individual websites in business and ensure the creation of a small number of advertising services billionaires.

ensignavenger|5 years ago

The whole point of Section 230 is to allow digital communications services to moderate their platforms without incurring liability for the things their users say. If you want to stop the moderation, all you would need to do is completely repeal Section 230- as it no longer serves any purpose under such a system.

elliekelly|5 years ago

I like HN’s approach and wish more platforms would follow a similar format. As far as I know, nothing is ever “removed” from the site - it’s just greyed out or hidden by default. Anyone who wants to read the bothersome comments can flip the switch to see them but no one can reply to them which seems like a really effective approach to me.

If a “censored” tweet couldn’t be shared/retweeted/replied to but was still available for anyone who wanted to seek it out then the idea (however distasteful) hasn’t been censored strictly speaking but it also hasn’t been amplified. I’d prefer a compromise that leaves control over acceptable content in the hands of the platform owner or the users rather than the government.
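As a rough illustration, the hide-but-don't-delete model described here can be expressed in a few lines. This is only a hypothetical sketch of the policy (HN's actual implementation is not public in this form):

```python
# Hypothetical sketch of the "grey out, don't remove" moderation model:
# a flagged comment stays readable behind an opt-in switch, but can no
# longer be replied to, so it is not amplified.

class Comment:
    def __init__(self, body):
        self.body = body
        self.flagged = False

    def moderate(self):
        # Moderation hides the comment by default; nothing is deleted.
        self.flagged = True

    def visible_to(self, wants_hidden):
        # Readers who flip the opt-in switch can still read it.
        return (not self.flagged) or wants_hidden

    def can_reply(self):
        # No replies to flagged comments means no further amplification.
        return not self.flagged

c = Comment("bothersome take")
c.moderate()
print(c.visible_to(False), c.visible_to(True), c.can_reply())
# prints: False True False
```

The design choice is that "visibility" and "amplification" are separate switches: the reader controls the first, the platform controls the second.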

s_y_n_t_a_x|5 years ago

I see you chose to attack the person, not the proposal.

You are wrong. The bill does not designate a political board; it requires tech companies that have more than 30 million U.S. users per month and an annual income of over $1.5 billion to publish all of their content moderation policies. Users who charge that the companies are not implementing content moderation policies fairly would be able to sue for $5,000 plus attorney fees.

I think it's reasonable for these social media behemoths to post their mod logs.

I'd even like to see sites like HNs do it. Lobsters does: https://lobste.rs/moderations

If you have a specific gripe with this, let's discuss the legal text.

I really don't see how GP is currently top comment.

Forcing giant social media companies to publish their content moderation is transferring power from the tech ELITE to the public. No political committee is in charge; the companies will be forced to publish their logs, and the courts can be used when users think companies are still acting in bad faith and not properly publishing their moderation logs.

PSA: READ THE BILL, IT'S SIX PAGES!!!

https://www.hawley.senate.gov/sites/default/files/2020-06/Li...

geofft|5 years ago

I'm not sure I agree with that framing of the relationship between Section 230 and free speech.

For reference, here's the law:

> (1) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

> (2) No provider or user of an interactive computer service shall be held liable on account of— (A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Section 230 was written to solve a very specific problem: Prodigy tried to moderate content on their site, and when someone posted libelous content and they didn't remove it, Prodigy was held legally responsible. CompuServe did not moderate content, and when someone posted libelous content, CompuServe was not held legally responsible. There was a perception that this was a counterintuitive result, and so Section 230 patched over it.

This has nothing to do with the ideological content of the communications. The messages in both cases were already unlawful because they were libelous - the question is whether CompuServe and Prodigy bore any liability (i.e., any obligation to not republish it), or just the end user.

Also, as written, Section 230 does not create an obligation to do anything. You don't have to moderate obscene, lewd, etc. content. You can choose not to moderate anything. The law simply says, 1, you the website operator aren't responsible for what people post, and 2, you don't gain any additional liability if you choose to moderate these things. It doesn't create any liability for not moderating them. The perception (which seems to have been empirically correct) is that Prodigy's approach would be more popular in the market than CompuServe's, and so the law should not create a legal incentive to act like CompuServe. The new law simply removed that incentive; it did not create a legal incentive to act like Prodigy.

The results of the two cases are only counterintuitive if you believe it is good for society for service providers to proactively moderate speech that is already illegal and err on the side of over-moderating. I don't think that belief is easy to reconcile with a strong pro-free-speech view - you're trusting a platform to be making decisions that would otherwise be made by courts, and you don't have nearly the representation/recourse/etc. you do with the legal system, if they decide to moderate you.

In particular, adding an obligation to protect free speech means that providers can only moderate content if they're confident it would result in legal liability. If they're not sure (suppose that, to pick a recent example, someone says that J. K. Rowling "cannot be trusted around children" - is this libelous, or a constitutionally-protected opinion?), they should err on the side of not moderating. But that matches the status quo ante Section 230. If you think that forums should err on the side of under-moderating, then it was perfectly fine to be in the legal situation where Prodigy's approach was riskier than CompuServe's.

Note also that neither of these scenarios does anything to discourage people from running forums where they tightly control what is said (ideologically or otherwise). If I want to host a personal blog with only my own posts, I can do that today, I could do that before Section 230, and I can do that essentially regardless of anyone's proposals (because I have a First Amendment right to say what I want and only what I want). If I want to invite my friends and only my friends to comment, I can do that too. If I want to invite the entire world to comment and I screen comments before posting, I can do that too (I also have a First Amendment right to free association). I'm still liable for unlawful posts (from libel to copyright infringement to whatever else), but if I'm willing to tightly moderate content, that's okay.

Another pro-free-speech opinion here, by the way, is that the real problem is with libel laws, and neither CompuServe nor Prodigy should have been held liable because the speech shouldn't have been illegal in the first place. This is entirely orthogonal to the "free speech" concern of perceived ideological bias.

It's only in the weird intersection of all of these things that the framing of Section 230 and ideological bias seems to make sense - you'd have to take the anti-free-speech view that ruinous penalties for libel are good, and then carve out an anti-free-speech exception that says that if you choose not to exercise your right to say what you want or associate with who you want, libel laws don't apply to you. And then, somehow, the two anti-free-speech approaches cancel out and turn into a free speech view - platforms are obligated to be non-ideologically-biased (in a sense defined by the government) for fear of arbitrary civil penalties.

(By the way, any free-speech reform to Section 230 really should start with repealing 230(e)(5), where FOSTA/SESTA partially removed Section 230's protections so that platforms became responsible for messages posted by users about "the promotion or facilitation of prostitution.")

christkv|5 years ago

Just change the wording to illegal speech instead of vague definition like indecent speech.

dlp211|5 years ago

[deleted]

SN76477|5 years ago

> His problem with Facebook/Tiwtter is perceived liberal bias

This seems to be because they live in a bubble where everyone agrees with them. But when they look at the real world they do not see the same agreement, giving them the perception of bias where there is none. They simply hold an unpopular opinion.

GoblinSlayer|5 years ago

A politically-appointed board won't do stellar moderation, but it sure will prevent the worst form of moderation that CEOs do.

bcrosby95|5 years ago

Be careful of what you ask for.

Section 230 exists because the courts punished Prodigy because they tried to moderate their forums but did it imperfectly, but didn't punish CompuServe because they let anything go. The idea is to allow imperfect moderation in addition to both zero and perfect moderation.

The internet without section 230 isn't a bastion of internet freedom. It's 4chan and 8chan. It's a shithole.

geofft|5 years ago

More precisely, the internet without section 230 is two things: it's 4chan and 8chan on one side and tightly moderated corporate-run comment sections on the other (because you need extremely proactive moderation to avoid liability for things people post). You'll still have social media, because the world loves it, but everything will be reviewed by a compliance team at a big tech company instead of being available immediately. Smaller sites won't be able to staff a proper review team - you can still run personal blogs and let trusted friends comment, but you can't do things like run a Mastodon or a phpBB open to the public if you want to do any moderation at all (and if you don't do any moderation, 8chan will raid you).

themacguffinman|5 years ago

I doubt that it will devolve into 4chan & 8chan, which are abhorrent to common people; there's no money in doing that.

The internet will find a different way to appeal to the mainstream, probably by becoming more similar to cable TV & Netflix & Disney: practically eliminate amateur content and stick to professional, big budget productions.

Consultant32452|5 years ago

I'm okay with imperfect moderation. What I'm not okay with is backdoor untracked political contributions under the guise of imperfect moderation. It feels, to me, that Twitter and Google have given billions of dollars' worth of political censorship/promotion/search bias. Let's get the FEC involved so we can measure and track this political spending.

prepend|5 years ago

I don’t think everything devolves to 4chan. But if I had to choose, I suppose I’d rather have 4chan than Facebook. But I hope we never have to choose between those extremes.

ooobit2|5 years ago

[deleted]

akersten|5 years ago

Can someone who supports "let's hold internet platforms responsible for what their users do on their platform" explain how that's any different than "let's hold gun manufacturers responsible for what users do with their guns?"

I fail to see a difference between the two, and think both are untenable fantasies.

vnchr|5 years ago

Internet platforms maintain control over their system whereas gun manufacturers give away control to gun buyers.

The gun manufacturer ceases to maintain control and cannot be assigned responsibility after the sale of the good.

arkades|5 years ago

You don't see the difference between "let's hold companies responsible for what people do as part of utilizing their services, while utilizing their services" and "let's hold companies responsible for what people do with an item they have purchased once entirely out of the supervision of that company, without any possible oversight or control"?

I can't hold a skateboard co. responsible for what people do with skateboards they've purchased. I can most certainly hold a skate park responsible for what happens in the skate park.

AzzieElbab|5 years ago

It is more of a pre-election warning shot. I expect a lot of negativity from both sides in November, so politicians can't allow platforms to interfere.

belorn|5 years ago

Gun manufacturer regulations and internet platform regulations are two quite different subjects with very little overlap.

If a platform controls and exercises editorial control over when something is said, what is said, and who may speak and who may listen, then it may be useful to hold that platform liable. It is about intent, control, and power.

Gun manufacturer regulation, however, is full of rules about protecting society and honoring international agreements. Selling guns to countries currently at war is problematic, so we hold those manufacturers responsible if they try to profit from running guns.

0134340|5 years ago

Really, even though the platform you're using voluntarily chooses what you're allowed to post, as with almost all online platforms?

Besides that, you're framing this wrong. If I host a platform as I host you in my house, I have a right and a duty to make sure you aren't committing illegal actions within my domain. This is a fairly universal law, written and unwritten, that who and what you host in your domain is your responsibility. Why should a few privileged platforms get a free pass?

throwaway_USD|5 years ago

> "let's hold gun manufacturers responsible for what users do with their guns?" I fail to see a difference between the two, and think both are untenable fantasies.

Sometimes the arms dealers (sellers, not manufacturers) are liable for the actions of the gun owners.

There is no shortage of cases, I searched Walmart (because Walmart scale), but some examples:

1. Walmart Settles Lawsuit for Selling Gun Used in Murder by Neo-Nazi (https://blogs.findlaw.com/injured/2018/11/walmart-settles-la...)

2. Wal-Mart sued over sale of bullets used in Pennsylvania murders (https://www.reuters.com/article/us-pennsylvania-bullets/wal-...)

Sellers of guns and ammunition assumed they were protected from liability by the federal Protection of Lawful Commerce in Arms Act.

save_ferris|5 years ago

In a nutshell, this would require congressional approval to pass. Both parties have expressed desire to alter the current legal protections that internet firms have, but it’s not clear if there will be a bipartisan consensus on what this change will look like if/when formal bills are introduced.

joshuamorton|5 years ago

Perhaps the most important line from the article:

> The Justice Department proposal is a legislative plan that would have to be adopted by Congress.

timmytokyo|5 years ago

This is an important point. Given that Congress is divided between the two parties, the chances of something like this becoming law are zero. So why is the proposal being made? It's a presidential election year, and the president is working the refs, trying to scare them away from anything that might make it even a little bit harder for him to get his "message" out.

nojito|5 years ago

>The Justice Department also will seek to make clear that tech platforms don’t have immunity in civil enforcement actions brought by the federal government, and can’t use immunity as a defense against antitrust claims that they removed content for anticompetitive reasons.

Oh boy...the costs of running Google, Twitter, Facebook and others... will quintuple overnight when Congress passes this.

adventured|5 years ago

It's wonderful to see the price of censorship by colluding monopolies is going to skyrocket.

I can't wait until the fines start raining down. They'll have earned every cent of the financial damages. The arrogant, biased platforms picked a fight they can't win with half the political power in the US.

This rapid, broad shift is why Larry and Sergey ran for the hills not long ago, abandoning Alphabet as fast as possible; they saw what was coming (including the anti-trust investigations). I bet they destroyed as much of their internal communication history as possible as well (legally of course, probably), so it can't be used against them or the company.

BelleOfTheBall|5 years ago

Ah, seeking to mess with Section 230 again, just like with the EARN It Act. Any company that stays headquartered in the USA if this passes is just begging for trouble.

fredthomsen|5 years ago

So funny considering the history of 230 and how Prodigy was the inspiration for it because they modded user posts.

cletus|5 years ago

That really doesn't change anything. If you want to do business in the US (and everyone does) then you're subject to US laws. "Jurisdiction" here is simply a question of how a country defines it and is willing and able to prosecute it.

For example, if two US citizens on US soil discuss insider trading of an Australian company that does not even do business in the US using trades on US brokers, those two individuals are in violation of the Australian Corporations Act and can be criminally prosecuted (by Australian authorities). Why? Because Australia claims jurisdiction over any Australian company.

Likewise, "sex tourism" with children in South East Asia is rampant and many countries are unwilling or unable to prosecute. Australia has deemed having sex with an underage person in a foreign country is likewise a crime in Australia that they can and do prosecute.

The US is able to exercise a lot of power with international banks because it has the power to remove a financial institution's access to the US banking system. It's this stick that allowed the IRS to go after Swiss banks for complicity in US citizens evading US taxes.

snarf21|5 years ago

That's what makes laws like this so silly. Look at how companies split into pieces to avoid corporate tax. They'll simply do the same here. Such a waste of time and effort and it really only affects the small companies which don't have this problem anyway. So dumb.

jimbob45|5 years ago

The GDPR was objectively far more consequential and I don’t recall a mass exodus of companies from the EU.

seemslegit|5 years ago

Pray tell, where should such a company go?

Nasrudith|5 years ago

I wonder if the colloquial understanding of platforms and ownership got bad for reasons even aside from the blatant propaganda of special interests.

Back in the '90s and '00s, even when Yahoo or AOL did dumb stuff like shutting down child molestation victim support group channels in ham-handed attempts to moderate, it didn't lead to anyone thinking the government should somehow punish them, even though it was rightfully called stupid and morally wrong. Was it because people actually understood the internet existed as many small sites as well as the big names?

mudil|5 years ago

When social media firms ban conservative voices, they need to be sued for interference with interstate commerce. Because that's what it is.

duskwuff|5 years ago

This is a gross misunderstanding of the Commerce Clause. It does not, and never has, placed any responsibility on private businesses to facilitate interstate commerce. (Nor is it clear that publishing an online posting is even a form of commerce.)

charwalker|5 years ago

I'm experimenting with Markdown in this comment and may edit to reformat links better. Actual comment:

Specifically when conservative voices are banned, or a voice of any leaning? If a voice, right or left, tweets supporting hate or violence, should that be removed with equal prejudice or left in place regardless? If it's a left-wing voice posting bannable content 9/10 times, is that unfair banning/censoring of left-wing voices or simply the ratio at which such bannable content occurs? Does that cross over into publisher status? I don't think so.

I ask as the outrage over specifically conservative voices being censored has less to do with reality and more to do with loud people wanting attention as the bias against them is [mostly made-up](https://thehill.com/opinion/technology/440703-evidence-contr...).

There isn't censorship targeting right-wing comments, just removal of extremist comments that often catch vocal right-wing groups MORE OFTEN than left-wing voices. There are loud people on the right (and some on the left) spouting misinformation, disinformation, debunked conspiracy theories (think QAnon), or outright threats and lies. These loud people and groups, when their content is removed, get louder, and right-wing media platforms embrace this because it feeds on a common [Persecution Complex](https://en.wikipedia.org/wiki/Persecutory_delusion) that is largely nonexistent and often linked directly to some level of what some call privilege (White/Rich/establishment/etc). Yes, there are left-wing media platforms that do the same, but they are nowhere close to the audience size of Fox News. Fox isn't even 'news': in October of 2018, they [specifically noted in their ToS they are entertainment](https://mediabiasfactcheck.com/fox-news/). If you're getting the idea that banning of conservative voices is censorship from any voice/commentator hosted by Fox News, that's not news, it's entertainment!

I mean, look at this [summary of legal cases](https://www.theverge.com/2020/5/27/21272066/social-media-bia...). The majority are from conservatives, with one item from a Democrat, and most are quickly struck down because the complaint is founded on an inaccurate idea of the First Amendment: the users' rights were not infringed, and the lawsuit attacked the platform's rights! The platform has the right to not let you use it for your inaccurate, biased, or even malicious content. You are not infringed upon by being removed from that platform for those issues specifically, or likely for anything the platform deems against their ToS or rules. In r/conservative, this means any dissent from the established norm, even just pointing out polling data that invalidates the headline, is an immediate ban. That's allowed by Reddit and not under the jurisdiction of the government. It's not lawsuit worthy either. My rights aren't being infringed if a subreddit wants to be an echo chamber of misinformation protected by heavy censorship. There is some schadenfreude when that sub complains of censorship of right-wing voices but rejects any sources stating it's not reflective of reality.

As a comparison, consider how climate science is presented across media. 99% of scientists agree Climate Change is aggravated by human activity and something we need to tackle; so does the Pentagon. 1% are the counter voices saying it isn't an issue or human activity is not a factor. These sides are then presented as equal (which is misinformation) and given equal weight, as if debated 1v1, not 99v1 like in reality. Same thing for right-wing voices and voters across the US, [they are a minority](https://news.gallup.com/poll/15370/party-affiliation.aspx). 25% of those polled identify with the GOP even though the GOP holds more than 51% of seats in the Senate and after the 2016 election held more than 51% in the House for that term. When this minority holding a majority is 'attacked', everyone on connected outlets is going to hear about it (remember, on some platforms this is entertainment, not news). That's basically the commentator's First Amendment right to protest the ban and definitely allowed under free speech. That doesn't mean it's an accurate portrayal of reality, just like the censorship of right-wing voices _seems_ biased but really isn't. In both cases, the minority is extremely vocal and active in disguising (or simply ignoring) data stating otherwise. Conflict, even artificial, drives clicks and revenue, and that's what entertainment is all about.

My bigger question is how this is interference with interstate commerce. I could see that argument applied to an influencer or commentator who is removed from their primary platform. If they lost revenue, they may even have standing. But that still isn't an issue for Twitter; you can be removed from any platform you don't own, and you should always have your own site and system set up for hosting your content. That's been preached by internet-first media groups since YouTube rose to prominence. But it isn't a violation of interstate commerce.

snuxoll|5 years ago

I don’t see this passing constitutional muster. You have a right to free speech, as do corporations - you can be ejected from a privately owned building for saying things the owner doesn’t agree with, and the same applies to online platforms.

This is an open-and-shut First Amendment case.

throwawaygh|5 years ago

> This is an open-and-shut First Amendment case.

Yes, it is, but the outcome would be the opposite of what you expect.

Constitutionally, private corporations can't be censored or compelled to speak when the speech is 1A-protected speech.

But not all speech receives 1A protection, and Section 230 is about the type of speech that isn't protected by the 1A. E.g., without 230, corporations could be sued for users' libel or held criminally liable for helping distribute lots of different types of speech (e.g., making terroristic threats, distributing child porn, facilitating illegal acts, etc.).

So, a case that hinges on 230 protections would be open-and-shut if 230 were repealed. Just with the opposite outcome of the one you're expecting.

hiram112|5 years ago

Not even close to open and shut.

You realize that an editorial posted on the NYTimes or Fox News site can get them sued for libel or defamation, right? This does, in fact, happen all the time. Read about several of the left-leaning cable networks and their lawsuits with the 'Covington Kids'. They are publishers, and are responsible for their content.

Google, Twitter, Reddit, etc. acted like platforms for many years, same as the telecoms, and nobody ever complained. Look at how much influence and cash they have. What could possibly cause them to risk this gold mine?

Donald Trump became president, and the liberal companies and employees in Silicon Valley / West Coast lost their nerve and resorted to censorship, deciding to use their 'platforms' as their own private political tools.

Oops...

Now they can enjoy the same restrictions that other publishers have always had to deal with. Hopefully their shareholders realize the source of the issue, and boot the activist executives and employees, as it's now going to cost them a lot of money, all of which they brought on themselves.

* https://www.washingtonpost.com/lifestyle/style/cnn-settles-l...

scarface74|5 years ago

And this is all because the President got mad at Twitter on the right and the left always think more government is the answer.

This is what happens when you get government involvement in tech.

A4ET8a8uTh0|5 years ago

No. Nothing is ever this simple. Even a cursory search would show that this is an ongoing saga linked to LEOs' displeasure with encryption. I am not a fan of Trump personally, but there's no reason to let it cloud your judgment.

save_ferris|5 years ago

Why isn't the market holding Facebook accountable for the numerous transgressions we've seen coming out of that company over the last several years? Because people don't understand or care how the money is made, which fundamentally undermines the argument that the market is always right.

From broad, repeated invasions of online privacy to numerous scandals involving state-sponsored disinformation campaigns, Facebook shows time and again that they are not responsible corporate stewards of the internet.

Zuckerberg has got to be one of the least popular Fortune 500 CEOs, and yet he's completely invincible; investors don't want to touch him.

So how do you propose to hold such a company accountable if not through regulation and oversight?

ybav|5 years ago

Congratulations, Twitter! You improved the Internet the same way the Internet Archive did: by pushing too far.

Hope you are satisfied with all that awesome power.

sdwedq|5 years ago

I agree. I used to believe in absolute freedom of speech on the web. But then people started sending goatse or whatever as a joke in emails. I learned to avoid opening any links from certain friends.

MySpace, Facebook, and Twitter were nice, clean spaces to hang out for a while. Then horrible and traumatic pictures and videos started showing up in my feed. I know the world is a horrible place, but I don't need constant reminders of it. I unfollowed as many people as I could.

Now, as a parent, I cannot constantly monitor these supposedly safe sites. I have seen disgusting or violent videos on YouTube Kids, Amazon videos aimed at kids, and even some kids' shows on Netflix.

These platforms should be responsible for the content they host, no matter who uploaded it. That would be one way to clean up the filth.

That's why I will pay for cable TV again and let someone moderate content for me.

g-b-r|5 years ago

There are filters for kids, and indeed it wouldn't be bad in any way if there were various filters on YouTube, Facebook, etc.; they just ought to be voluntary (or at most imposed by one's parents).

And... this isn't going to change "kids shows on Netflix"...