top | item 39535031


hemogloben | 2 years ago

Complaints about historically inaccurate racial makeups seem weird to me. I guess people really do want AI to perfectly supplant image creation or something, but to me the tradeoff seems clear:

* Prioritize diversity in image creation by adding guardrails so the AI doesn't become a tool of a hate-spewing minority

* Preserve historical accuracy, which can be prompted to produce prejudiced imagery

To be clear, we aren't talking about a camera that swaps people's race for 'diversity'. We're talking about an image generation algorithm that adds a layer of diversity on top to prevent misuse. Yeah, of course this results in weird behavior sometimes... That's kinda literally the point?

Who is honestly confused by this? Is it necessary for an AI image generation algo to spit out historically accurate images of Gettysburg when prejudiced misuse is the far more likely outcome of that accuracy?

And importantly, when a company makes that value judgement, preferring prejudice defense over historical accuracy, it's seen as pretending history changed rather than what it actually is: a defense against a mechanism of abuse?

It just seems like an absurd and disingenuous over-reaction and lack of pragmatism. Yeah. This is a tragedy of the commons. Make prejudice less acceptable and you can have the AI gen you want.

Note: Obviously, it's kinda moot as anyone who seriously wants to generate hate speech/imagery will just move to something that allows that, but it's still perfectly acceptable for a company to draw a line and say "not on our software".



alexey-salmin | 2 years ago

> We're talking about an image generation algorithm that adds a layer of diversity on top to prevent misuse.

Wouldn't a faithful representation of underlying data without artificial biases be the best way to prevent misuse?

hemogloben | 2 years ago

No. As a society we have inbuilt bias. Every step from image capture, to image selection, to image training, to performance evaluation includes bias.

That isn't just true of AI. Whether electrical or chemical, experiments must always consider their environment and account for confounding factors.

ffgjgf1 | 2 years ago

> We're talking about an image generation algorithm that adds a layer of diversity on top to prevent misuse

So if LLMs did the same (i.e. purposefully distorted facts and historical events for arbitrary, political reasons) it would also be acceptable?

> historically accurate images of Gettysburg when prejudiced misuse is the far more likely outcome of that accuracy?

This is pure conjecture. But the answer is no, the only acceptable behavior in these circumstances would be for the model to refuse to generate the image and explicitly explain why this type of censorship is necessary.

> It just seems like an absurd and disingenuous over-reaction and lack of pragmatism.

That does sound explicitly Orwellian.

> but its still perfectly acceptable for a company to draw a line and say "not on our software".

Yes, it’s even more acceptable for anyone to criticize that company for its decisions, make fun of its work culture and mock its CEO.

hemogloben | 2 years ago

> So if LLMs did the same (i.e. purposefully distorted facts and historical events for arbitrary, political reasons) it would also be acceptable?

You're talking as if there is a way to get an 'unbiased' AI. There isn't. It is inherently biased by its training, it hallucinates, and it is further biased by its prompt.

The whole endeavor is to bias it.

I'd prefer that AI be labelled on the tin for what its biases are attempting to do, and promoting diversity and deterring abuse seems like a perfectly reasonable metric to use.

If that's not good for you, fine, but you can't pretend that you're utterly baffled why they would make that choice over any other.

There literally ISN'T a way to not have a biasing prompt.

polski-g | 2 years ago

> Who is honestly confused by this? Is it necessary for an AI image generation algo to spit out historically accurate images of Gettysburg when prejudiced misuse is the far more likely outcome of that accuracy?

If I wanted a black female Nazi officer, or a pregnant female pope, I would ask for it. I don't need my input query secretly rewritten for me.

Taurenking | 2 years ago

> Prioritize diversity in image creation by adding guardrails so the AI doesn't become a tool of a minority hate spewing population

> We're talking about an image generation algorithm that adds a layer of diversity on top to prevent misuse

Huh? That's exactly what caused them to withdraw Gemini in the first place.

How is manipulating history an "over-reaction", or wanting factual/accurate data/images "prejudiced"?