While experimenting with digital art and AI tools, I noticed how aggressively filters block historical, political, or artistic imagery. I wrote about how this impacts art, research, and cultural memory. Curious how others here see this balance between safety and censorship.
https://tsevis.com/censorship-ai-and-the-war-on-context
Nextgrid|6 months ago
The initial idea was good and very much needed to eliminate (or at least heavily reduce) long-established racism/bigotry.
But the problem is that a lot of people started to abuse it as a virtue-signalling mechanism and/or a way to justify their jobs, leading to insanities like renaming the Git “master” branch.
I suspect AI safety is the same. There's a grain of truth and usefulness to it, but no AI safety person will ever declare "we figured out how to make models safe, my job here is done", so they have to keep pushing the envelope, even to ridiculous extremes.
stuaxo|6 months ago
Many of the arguments against just seem to come down to "I want to be a jerk".
armchairhacker|6 months ago
Despite all this, AIs still get fooled: there are working jailbreaks for GPT-5 to this day, and nudity and piracy still slip past YouTube's filters.
The only reliable way to distinguish "good" from "bad" is human competence and judgment, which has never existed at scale.