maxbendick | 2 years ago

Imagine typing a description of your ideal self into an image generator and everything in the resulting images screamed at a semiotic level, "you are not the correct race", "you are not the correct gender", etc. It would feel bad. Enough said.

I 100% agree with Carmack that guardrails should be public and that the bias correction on display is poor. But I'm disturbed by the examples some people are choosing. Have we already forgotten the wealth of scientific research on AI bias? There are genuine dangers from AI bias that global corporations must avoid to survive.

anonym29 | 2 years ago

>Imagine typing a description of your ideal self into an image generator and everything in the resulting images screamed at a semiotic level, "you are not the correct race", "you are not the correct gender", etc. It would feel bad. Enough said.

It does this now, as a direct result of these "guardrails". Go ask GPT-4 for a picture of a white male scientist, and it'll refuse to produce one. Ask it for any other color/gender identity combination of scientist, and it has no problem.

You can make these systems offer equal representation without the systemic, algorithmic exclusion by skin color and gender identity that's going on right now.

mike_hearn | 2 years ago

That's not the case. ChatGPT 4 will happily draw a white male scientist. I just tried it and it worked fine. A very handsome scientist it made too!

You might be thinking of a previous generation of OpenAI systems, which did things like randomly appending the word "black" to any prompt involving people. Users detected this with prompts like "A woman holding a sign that says", which made the model render the injected word on the sign itself.
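
For anyone who hasn't seen this before, here's a minimal sketch of what that kind of naive prompt injection might have looked like (illustrative Python only; the term list, trigger words, and function names are guesses, not OpenAI's actual code):

    import random

    # Hypothetical term list and trigger words; OpenAI never published the real logic.
    DIVERSITY_TERMS = ["black", "white", "asian", "hispanic"]
    PEOPLE_WORDS = {"person", "woman", "man", "scientist", "doctor"}

    def inject_term(prompt: str) -> str:
        """Naively append a random demographic term when the prompt mentions a person."""
        if any(word in PEOPLE_WORDS for word in prompt.lower().split()):
            return prompt + " " + random.choice(DIVERSITY_TERMS)
        return prompt

    # The detection trick: the appended word leaks onto the sign in the image.
    print(inject_term("A woman holding a sign that says"))
    # e.g. -> "A woman holding a sign that says black"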

OpenAI has improved dramatically in this regard. When ChatGPT/DALL-E were new they had similar problems to Gemini, but to their credit (and Sam Altman's), they listened. It's getting harder and harder to find examples where OpenAI models express obvious political bias or refuse requests for Californian reasons. Surely some examples still exist, but there's no longer much worry about normal people encountering refusals or egregious ideological bias in the course of regular usage. I would expect there are still refusals for queries like "how do I build a bomb", and they've been trying to block other things like regurgitation of copyrighted material, but that's perceived as much more reasonable and doesn't stir up the same feelings.

mpalmer | 2 years ago

Imagine being able to configure the image generator with your own preferences for its output.
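
A minimal sketch of what such a user-facing preference knob could look like, assuming an entirely hypothetical API (no current image generator exposes anything like this):

    from dataclasses import dataclass

    @dataclass
    class OutputPreferences:
        # Hypothetical per-user default, appended when a prompt mentions a person.
        default_person_description: str = ""

    def apply_preferences(prompt: str, prefs: OutputPreferences) -> str:
        """Append the user's own default description instead of a provider-chosen one."""
        if prefs.default_person_description and "person" in prompt.lower():
            return f"{prompt}, {prefs.default_person_description}"
        return prompt

    prefs = OutputPreferences(default_person_description="depicted as a red-haired woman")
    print(apply_preferences("a person reading in a library", prefs))
    # -> "a person reading in a library, depicted as a red-haired woman"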