dotandgtfo | 7 months ago
Content which is not directly illegal is covered by the voluntary code of conduct on disinformation [1]. If you can point me to a provision which allows governments to force platforms to remove content that is not explicitly illegal, I'll be very impressed. Because it ain't there.
It does say that algorithms should be tweaked so as not to spread "damaging disinformation" - i.e. reduce its amplification. And it does say that platforms shouldn't allow users who create disinformation to monetize their content, like the good old Macedonian troll farms [2].
But in the end - there are no sanctions for individual pieces of content. These are guidelines. If platforms consistently ignore them and it turns into systemic harm, then fines can start piling up. But never for a single piece of content - only for systemic malpractice.
Yes. It's a clever law because it doesn't give governments the power to remove content which is not illegal. But it does still force platforms to do something about the incentives they create for third parties around content.
And no. Throwing the baby out with the bathwater and letting foreign companies self-regulate illegal content is frankly ludicrous, considering their stellar track record of not giving a shit because that's the cheapest option.
[1] https://digital-strategy.ec.europa.eu/en/library/code-conduc...
[2] https://www.cambridge.org/core/journals/ps-political-science...
mytailorisrich | 7 months ago
Someone's "misleading information" is someone else's political campaigning.
The EU Commission is guilty of hypocrisy and doublespeak on this when it states that "the Code of Conduct aims to combat disinformation risks while fully upholding the freedom of speech", when those two things are obviously mutually exclusive.