top | item 45138553

plopilop | 5 months ago

Sooo... Should we ban Google too? It also orders its search results with algorithms. Similarly, HN and Reddit order the contents of their front pages with algorithms, and in the case of Google and Reddit, the algorithm is personalized to the user's preferences.

Or do we only ban websites that design their algorithms to trigger strong emotional reactions? How do you define that? Even Musk doesn't go around saying that the algorithm is modified to promote the alt-right; instead he pretends it is all about "bringing balance back". Furthermore, I would argue that systems based on votes, such as Reddit or HN, are much more likely than other systems to push such content. We could issue a regulation banning specific platforms or websites (TikTok, X...) by naming them individually, but that would probably go against many rules of free competition, and would be quite easily circumvented.

Not that I disagree about the effect of social media on society, but regulating this is not as easy as "let's ban the algorithm".

ktosobcy | 5 months ago

Erm, FB itself admitted it ran research on users' emotional responses to the content it shows.

FB/X's modus operandi is to keep as many people glued to the screen for as long as possible. The most triggering content will awaken all those "keyboard warriors" to fight.

So instead of seeing your friends and the people you follow on there, you would mostly see something that affects you one way or another (hence the proliferation of more and more extreme stuff).

Google is going downhill too, but for different reasons - they also care only about investors' bottom line, but being the biggest ad provider, they don't care all that much whether people spend time on the google.com page or not.

plopilop | 5 months ago

Oh, I know that strong emotions increase engagement, outrage being a prime candidate. I also have no issue believing that FB/TikTok/X etc. aggressively engage in such tactics, e.g. [0]. But I am not aware of FB publicly acknowledging that it deliberately tunes the algorithm to this effect, even though it carried out some research on the effects of emotions on engagement (I would love to be proven wrong, though).

But suppose FB did publicly say it manipulates its users' emotions for engagement, and a law is passed preventing that. How do you verify that the new FB algorithm is not manipulating emotions for engagement? How do you enforce your law? If you are not allowed to create outrage, are you allowed to promote posts that expose politicians' corruption? Where is the limit?

Once again, I hate these algorithms. But we cannot regulate by saying "stop being evil"; we need specific metrics, targets, and objectives. A law that is too broad will ban Google as much as Facebook, and a law that is too narrow can be circumvented in many ways.

[0] https://www.wsj.com/tech/facebook-algorithm-change-zuckerber...