top | item 39458178

sotasota | 2 years ago

Because the wrongness is intentional.

chatmasta | 2 years ago

Exactly. Sure this particular example is driven by political rage, but the underlying issue is that the maintainers of these models are altering them to conform to an agenda. It's not even surprising that people choose to focus on the political rage aspect of it, because that same political rage is the source of the agenda in the first place. It's a concerning precedent to set, because what other non-political modifications might be in the model?

noahtallen | 2 years ago

Well, every model is altered to conform to an agenda. You will train it on data which you have personally picked (and which is therefore subject to your own bias), and you'll guide its training to match the goal you wish to achieve with the model. If you were doing the training, your own agenda would come into play. Google's agenda is to make something very general that works for everyone.

So if you're trying to be as unbiased as humanly possible, you might say, just use the raw datasets that exist in the world. But we live in a world where the datasets themselves are often biased.

Bias in ML and other types of models is well-documented, and can cause very real repercussions. Poor representation in datasets can cause groups to be unfairly disadvantaged when an insurance premium or mortgage is calculated, for example. It can also mean your phone's ML photography system doesn't expose certain skin colors very well.

Even if it were trained with a statistically representative dataset (e.g. about 2/3 of the US is white), you want your model to work for ALL your customers, not just 2/3 of them. Since ML is largely statistics, your trained model will learn "most of this dataset is white" and its results will reflect that. So it is 100% necessary to make adjustments if you want your model to work accurately for everyone, and not just the dominant population in the dataset.
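One standard, non-controversial form of such an adjustment is reweighting examples inversely to class frequency so the majority group doesn't dominate training. This is a minimal sketch of that idea (the same principle behind scikit-learn's `class_weight='balanced'`); the labels and function name here are illustrative, not from any particular model's pipeline:

```python
from collections import Counter

def balanced_weights(labels):
    """Per-example weights inversely proportional to class frequency,
    so each class contributes equally in aggregate during training."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return [n / (k * counts[y]) for y in labels]

# Toy dataset where one group makes up ~2/3 of the examples:
labels = ["majority"] * 4 + ["minority"] * 2
weights = balanced_weights(labels)
# Majority examples get weight 0.75, minority examples 1.5,
# so both classes carry equal total weight (3.0 each).
```

With plain unweighted training, a loss-minimizing model is pulled toward whatever works for the 2/3 group; weighting like this is one of the simplest ways to counteract that pull.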

Even if we aren't using these models for much yet, a racist AI model would seriously damage people's trust in and reliance on these models. As a result, training models to avoid bias is 100% an important part of the agenda, even when the agenda is just creating a model that works well for everyone.

Obviously, that's gone off the rails a bit with these examples, but it is a real problem nonetheless. (And training a model to understand the difference between our modern world and what things were like historically is a complex problem, I'm sure!)

epistasis | 2 years ago

Is it intentional? You think they intentionally made it not understand skin tone distribution by country? I would believe it if there was proof, but with all the other things it gets wrong it's weird to jump to that conclusion.

There's way too much politics in these things. I'm tired of people pushing on the politics rather than pushing for better tech.

bakugo | 2 years ago

> Is it intentional? You think they intentionally made it not understand skin tone distribution by country? I would believe it if there was proof, but with all the other things it gets wrong it's weird to jump to that conclusion.

Yes, it's absolutely intentional. Leaked system prompts from other AIs such as DALL-E show that they are explicitly prompted to inject racial "diversity" into their outputs even in contexts where it makes no sense, and there's no reason to assume the same isn't being done here. If anything, the result seems far worse than anything I've seen from DALL-E and the others.

Workaccount2 | 2 years ago

>I'm tired of people pushing on the politics rather than pushing for better tech.

I'm surprised you're not attacking Google over this, then...

robswc | 2 years ago

I mean, I asked it for a samurai from a specific Japanese time period and it gave me a picture of a "non-binary indigenous American woman" (its words, not mine), so I think there is something intentional going on.