top | item 39535166

hemogloben | 2 years ago

No. As a society we have inbuilt bias. Every step from image capture, to image selection, to image training, to performance evaluation includes bias.

That isn't just true of AI. Electrically, chemically, experiments must always consider their environment and account for confounding factors.

ffgjgf1|2 years ago

> Electrically, chemically, experiments must always consider their environment and account for confounding factors

Implying that that’s in any way similar to what Google et al. are doing is rather bizarre. Even if your initial point were valid, they have no non-biased way to measure these biases.

So they just end up increasing the total “amount” of bias, not the other way around.

alexey-salmin|2 years ago

What's wrong with an AI model that correctly models the current state of the society we live in? It's called a "model" for a reason.

You suggest aiming for a model that follows some "true reality", which is not possible. Not even science can achieve this: the chase for true reality never ends, and we can only get closer (and often even the opposite happens).

> Electrically, chemically, experiments must always consider their environment and account for confounding factors.

Sounds legit. "This experiment data doesn't look diverse enough, please apply a bunch of biases to it. Make sure to follow the biases I like and avoid the ones I dislike. Don't mention any of this in the paper and don't publish the raw data".

hemogloben|2 years ago

It isn't the current state of society. It's the current state of the training corpus + prompt. Those implicitly include bias.

It sounds like that's acceptable to you because you think the current state of the training corpus == the current state of society, and you view any bias in the prompt as bias.

The truth is, most of this ML work happens in corpus selection + prompt selection. There literally ISN'T a way to avoid bias. So the problem becomes which bias you select.

And in that scenario, choosing abuse-reducing measures seems like the most pragmatic option (to me).

dkn775|2 years ago

It locks our future knowledge to current biases and possibilities.