throw0101b | 7 months ago

> Define disproportionately propagating far-right content.

How about:

> Elon Musk’s artificial intelligence firm xAI has deleted “inappropriate” posts on X after the company’s chatbot, Grok, began praising Adolf Hitler, referring to itself as MechaHitler and making antisemitic comments in response to user queries.

[…]

> “The white man stands for innovation, grit and not bending to PC nonsense,” Grok said in a subsequent post.

* https://www.theguardian.com/technology/2025/jul/09/grok-ai-p...

Or:

> In a series of posts – often picking up language from users or responding to their goading – Grok repeatedly abused [Polish PM] Tusk as “a fucking traitor”, “a ginger whore” and said the former European Council president was “an opportunist who sells sovereignty for EU jobs”.

* https://www.theguardian.com/technology/2025/jul/08/musks-gro...

Or last month's:

> Elon Musk’s artificial intelligence chatbot Grok had been repeatedly mentioning “white genocide” in South Africa in its responses to unrelated topics and telling users it was “instructed by my creators” to accept the genocide “as real and racially motivated”.

* https://www.theguardian.com/technology/2025/may/14/elon-musk...

chrisco255|7 months ago

A bot went off the rails, and was subsequently corrected. Again, what is the problem? I thought France had freedom of speech but maybe not?

williamdclt|7 months ago

> I thought France had freedom of speech but maybe not?

Freedom of speech in France (and many other countries) is not the same as in the US (assuming you are from the US). Apart from the question of whether it even applies to an AI, it does not protect hate speech.

fzeroracer|7 months ago

I would say that after the bot has gone off the rails three-plus times and conveniently pushed specific agendas (or just went full racist), it's probably time for an investigation to figure out who's doing what and why.

makeitdouble|7 months ago

I'd see two sides at least:

- the bot can't operate in a legal void. It's pushing messages on a public platform; someone has to be responsible for that.

- if correcting speech is enough, we should all be free to push whatever horrible things we want online, subsequently correct them, and never face any consequences whatsoever.

aredox|7 months ago

We have freedom of movement, but there are restricted areas.

We have freedom of religion, but you can't, e.g., declare "child porn" to be your religion.

We have freedom of assembly, but the police can disperse a crowd.

And that's the case everywhere, the US included.

etc etc etc

zitsarethecure|7 months ago

France's freedom of speech is as strong as, or stronger than, the US's.

motorest|7 months ago

> A bot went off the rails, and was subsequently corrected.

How sure are you about that? I mean, it is more likely that the bot overshot how hard it was expected to push propaganda. Musk is on record expressing disagreement with how LLMs are trained to be "politically correct", which is a dog whistle for pushing extremist views.

> Again, what is the problem? I thought France had freedom of speech but maybe not?

This is a puerile and simplistic view of what freedom of speech is. You are free to speak your mind, but others around you are free from experiencing abuse and discrimination. Also, the media has more responsibilities than morons running their mouths, and even those are liable for hate speech.

Cthulhu_|7 months ago

If you don't see the problem with far-right rhetoric, then you are part of the problem. Europe has free speech protections, but only within reason; Nazi symbolism is explicitly banned in Germany, for example, and Holocaust denial is also illegal.

Then there are some things you can say without getting into legal trouble, but you may get killed for them by those you offend.

jabjq|7 months ago

Friendly reminder that

> In France, a woman spent 23 hours in custody for giving French President Emmanuel Macron the middle finger. (She was acquitted after arguing she had pointed her finger in the air and not directly at the president.)

https://archive.is/MNgc6

oersted|7 months ago

These things kept happening for 2-3 years with the pre-ChatGPT LLM tech demos from Microsoft, Meta, and Google.

Not to be an apologist, of course; I just wanted to point out that it may be a sign of xAI being behind the curve on some technical aspects, or at least on best practice, likely on purpose. Sure, they probably did some ideological meddling too.

For all the (valid) criticism of alignment/censoring, one has to acknowledge the success of the pragmatic approach from OpenAI and Anthropic. As much as we might not want to admit it, a bit of censoring is kind of critical to being able to use LLMs seriously to solve real problems.