
bdbdjfjrbrbf | 1 month ago

> Other image generation models aren't trained on porn so they don't know how to produce it.

Not true. Google’s models are excellent at making adult content; they just have aggressive pre- and post-filtering. But the filtering isn’t perfect, and glimpses of the model's dirty mind slip through the cracks.

(I’m not sure about OpenAI; its filtering is much more aggressive, so it’s harder to probe. I’ve seen it make sexualized content, but I haven’t seen anything that it would necessarily have learned from porn.)

Grok’s lack of anything resembling effective filtering is an intentional product choice, not a training data limitation. Not surprising, coming from “pedo guy” with a breeding fetish and an obsession with catgirls. What horrors we might find if we searched his drives…


labrador | 1 month ago

This is what Gemini says, although it may be a hallucination:

> Google invests in the safety of its training data from the outset. This involves efforts to filter out problematic content, such as violent, offensive, or sexually explicit material, before or during the model training phase. The aim is to ensure the models are trained on appropriate data consistent with Google's AI Principles and policies. The company has a zero-tolerance policy for illegal content, such as child sexual abuse material (CSAM), and works to ensure such material is not included in the datasets.