hemogloben | 2 years ago
That isn't just true of AI. Electrically, chemically, experiments must always consider their environment and account for confounding factors.
ffgjgf1|2 years ago
Implying that that’s in any way similar to what Google et al. are doing is rather bizarre. Even if your initial point was valid, they have no non-biased way to measure these biases.
So they just end up increasing the total “amount” of bias, not the other way around.
alexey-salmin|2 years ago
You suggest aiming for a model that follows some "true reality", which is not possible. Not even science can achieve this because our chase for the true reality never ends; we can only get closer (and often even the opposite happens).
> Electrically, chemically, experiments must always consider their environment and account for confounding factors.
Sounds legit. "This experiment data doesn't look diverse enough, please apply a bunch of biases to it. Make sure to follow the biases I like and avoid the ones I dislike. Don't mention any of this in the paper and don't publish the raw data".
hemogloben|2 years ago
It sounds like that's acceptable to you because you think the current state of the training corpus == the current state of society. And you view any bias in the prompt as bias.
The truth is most of this ML happens in corpus selection + prompt selection. There literally ISN'T a way to avoid bias. So the problem becomes which bias you select.
And in that scenario, choosing abuse-decreasing measures seems the most pragmatic (to me).
dkn775|2 years ago