
perceptronas | 2 years ago

Their product embodied their values. It turned out that their values are quite radical when exposed to the general public. In my opinion, unless there are changes to the people and the culture, it's quite hard to imagine their long-term success in this space.


foota|2 years ago

I don't think it's radical that, when prompted with something like "Generate photos of doctors", it's reasonable to return a set of images that shows diversity (e.g., instead of a bunch of white men), even if that isn't representative of a "population sample".

I guess, though, there were unintended consequences. I imagine they're prompting the model with something along the lines of "and remember to be diverse!", and there are obviously some cases where this isn't a good idea, in particular when the prompt itself is explicitly racial or when the result is "charged".

E.g., if someone asks for photos of white people, the AI shouldn't generate photos of people that aren't white (and fine, it might return a disclaimer that it only generated white people because you asked it to).

More nuanced are situations like asking it about historically evil people (e.g., Nazis, as in one of the examples I've seen), but also more benign things like British monarchs. Figuring out which kinds of results to "inject" diversity into isn't easy, though, since there are many edge cases.
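A minimal sketch of the kind of prompt rewriting described above, purely illustrative: the function, the injected hint, and the keyword list are all invented here, and nothing is known about Google's actual pipeline. The point is that the whole problem collapses into one brittle check.

```python
# Hypothetical sketch of "and remember to be diverse!"-style prompt rewriting.
# All names, strings, and keyword lists are made up for illustration.

DIVERSITY_HINT = " Depict a diverse range of genders and ethnicities."

# Naive list of markers where injecting diversity would change the meaning of the request.
SENSITIVE_MARKERS = [
    "white", "black", "asian",            # explicit demographic requests
    "nazi",                               # historically specific, "charged" subjects
    "british monarch",                    # historically specific figures
]

def rewrite_prompt(user_prompt: str) -> str:
    """Append a diversity instruction unless the prompt already pins down
    who should appear. All the edge cases live in this single check."""
    lowered = user_prompt.lower()
    if any(marker in lowered for marker in SENSITIVE_MARKERS):
        return user_prompt  # leave explicitly demographic/historical prompts alone
    return user_prompt + DIVERSITY_HINT

print(rewrite_prompt("Generate photos of doctors"))
# hint appended, as intended

print(rewrite_prompt("Generate photos of 1940s German soldiers"))
# hint still appended, because no marker matches -- exactly the kind of
# edge case the comment above is pointing at
```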

mike_hearn|2 years ago

> I don't think it's radical that when prompted with something like "Generate photos of doctors", that it's reasonable to return a set of images that shows diversity

Historically Google had a very simple solution to globally differing expectations about query results: IP or account geolocation. Query personalization by geography is one of the biggest quality wins in web search. Generalizing, an AI built with the same values and ethos as classical Google web search would respond to "Generate a photo of doctors" differently depending on where in the world you asked it from.

That solution also fixes many other cases that aren't third rails, like "Show me a good nearby restaurant serving local food", which you can't solve by attempting to hallucinate a non-existent restaurant that serves a menu of every conceivable dish weighted by population size.
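Roughly what that geolocation approach could look like, as a sketch: defaults for an under-specified prompt come from the requester's region rather than from one global policy. The region table and helper names here are invented; real systems would derive the region from IP or account data.

```python
# Illustrative only: pick generation defaults from the requester's region.
# The table entries and function names are assumptions, not any real product's behavior.

REGIONAL_DEFAULTS = {
    "JP": {"cuisine": "Japanese", "demographics": "typical of Japan"},
    "NG": {"cuisine": "Nigerian", "demographics": "typical of Nigeria"},
    "SE": {"cuisine": "Swedish",  "demographics": "typical of Sweden"},
}

def contextualize(prompt: str, country_code: str) -> str:
    """Attach region-derived context so 'doctors' or 'a good nearby restaurant'
    resolves against where the user actually is."""
    defaults = REGIONAL_DEFAULTS.get(country_code)
    if defaults is None:
        return prompt  # unknown region: leave the prompt untouched
    return (f"{prompt} (assume a setting {defaults['demographics']}; "
            f"local cuisine: {defaults['cuisine']})")

print(contextualize("Generate a photo of doctors", "JP"))
print(contextualize("Show me a good nearby restaurant serving local food", "NG"))
```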

It's unclear why this solution wouldn't resolve all their stated concerns, so we might infer that their actual goals differ from their stated goals. For example, influencing the people who use their services.

Ekaros|2 years ago

And I think it is radical. If the consumer wants a certain race, gender, disability, and so on, they can specify it via the prompt.

If they want diversity, do it via the corpus. Get images from Africa and Asia and so on, and feed those to the model to get there.

jart|2 years ago

I don't think Google sets the agenda. They get their orders like the rest of us, and then probably handed the work over to an army of low-paid offshore contractors to implement. How else could you explain Gemma believing Abe Lincoln was black? If Googlers had done the mind-killing work of RLHF training themselves, things would have turned out differently. They were, however, nice enough to give us a version of the model without RLHF training, and it's illuminating to compare how it responds to certain queries: oftentimes the sentences will be the same, with only specific key words or phrases changed.
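One simple way to run that comparison yourself, as a sketch: feed the same query to the pretrained Gemma checkpoint and to the instruction-tuned one and diff the outputs. This assumes the Hugging Face transformers library and access to the public google/gemma-2b and google/gemma-2b-it checkpoints; the prompt is just an example.

```python
# Compare how the base and instruction-tuned Gemma checkpoints answer the same query.
# Requires `transformers` and accepted access to the Gemma weights on Hugging Face.

from transformers import AutoModelForCausalLM, AutoTokenizer

def complete(model_id: str, prompt: str, max_new_tokens: int = 80) -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(output[0], skip_special_tokens=True)

prompt = "Describe Abraham Lincoln's appearance."
base_text = complete("google/gemma-2b", prompt)      # pretrained, no instruction tuning
tuned_text = complete("google/gemma-2b-it", prompt)  # instruction-tuned variant

# Diffing the two outputs is where the comparison gets illuminating:
# often the sentences line up, with only a few key words swapped.
print("BASE:\n", base_text)
print("TUNED:\n", tuned_text)
```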

Taurenking|2 years ago

Wishful thinking. The CEO of Gemini expressed openly racist views on Twitter. Those views are reflected in the tool they built.

> I don't think Google sets the agenda

Soo, you're saying they did not internally test this at all, and delegated that to whoever sets their agenda?

visarga|2 years ago

Their values, sure, but there is also common sense. If they start pushing out unbelievable things like a black queen of England... they are hurting their own goals. Diversification should stop at the boundary of believability.

astrange|2 years ago

A company isn't a single "they"; it's made of people, and not all of those people see all of its products before they're launched.