top | item 39458887

Jason_Protell | 2 years ago

Is there any evidence that this is a consequence of DEI rather than a deeper technical issue?

flumpcakes|2 years ago

I don't understand how people could even argue that this is in any way acceptable. "Bias" has become some boogeyman to fight, and anything "non-white" is now beyond reproach. Shocking.

gs17|2 years ago

"I can't generate white British royalty because they exist, but I can make up black ones" is pretty close to an actually valid reason.

Jensson|2 years ago

You get 4 images per prompt and are lucky to get one white person when you ask for it; no other model has that issue. Other models have no problem generating black people either, so it isn't that other models only generate white people.

So either it isn't a technical issue, or Google failed to solve a problem everyone else easily solved. The chances of this having nothing to do with DEI are basically zero.

ceejayoz|2 years ago

Depending on how broadly you define it, something like 10-30% of the world's population is white. Africa is about 20% of the world population; Asia is 60% of it.

One in four sounds about right?

minimaxir|2 years ago

When DALL-E 2 was released in 2022, OpenAI published an article noting that the inclusion of guardrails was a correction for bias: https://openai.com/blog/reducing-bias-and-improving-safety-i...

It was widely criticized back then: the fact that Google both brought it back and made it more prominent is weird. Notably, OpenAI's implementation is more scoped.

nickthegreek|2 years ago

I don't think so. My boss wanted me to generate a birthday image for a co-worker of John Cena fly fishing. ChatGPT refused to do so, so I had to describe the type of person John Cena is instead of using his name. It kept giving me bearded people no matter what. I thought this would be the perfect time to try out Gemini for the first time. Well shit, it won't even give me a white guy. But all the black dudes are beardless.

update: google agrees there is an issue. https://news.ycombinator.com/item?id=39459270

8f2ab37a-ed6c|2 years ago

It feels like the image generation it offers is perfect for a sort of California-Corporate style: e.g. you ask for a "photo of people at the board room" or "people at the company cafeteria" and you get the corporate-friendly ratio of colors, ability levels, sizes, etc. See Google's various image assets: https://www.google.com/about/careers/applications/ . It's great for coastal and urban marketing brochures.

But then the same California-Corporate style makes no sense for historical images, so perhaps this is where Midjourney comes in.

allmadhare22|2 years ago

Depending on what you ask for, it injects the word 'diverse' into the response description, so it's pretty obvious they're brute-forcing diversity into it. E.g. ask "Generate me an image of a family" and you will get back "Here are some images of a diverse family".

mike_d|2 years ago

It is possible Google tried to avoid likenesses of well-known people by removing from the training data any image that contained a face and then including a controlled set of people images.

If you give a contractor a project where you want 200k images of people who are not famous, they will send teams to regions where you may only have to pay each person a few dollars to be photographed, likely SE Asia and Africa.