hemogloben | 2 years ago
It sounds like that's acceptable to you because you think the current state of the training corpus == the current state of society, and you treat any bias introduced via the prompt as the only bias worth worrying about.
The truth is that most of the work in this kind of ML happens in corpus selection + prompt selection. There literally ISN'T a way to avoid bias, so the problem becomes which bias you select.
And in that scenario, choosing abuse-reducing measures seems like the most pragmatic option (to me).
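To make the point concrete, here's a toy sketch (hypothetical data, labels, and policy names, not any real training pipeline) showing that corpus selection is itself a bias choice: "no filter" just inherits whatever skew the raw corpus has, while an abuse filter is a deliberately chosen skew.

```python
# Hypothetical abuse-label set and corpus, purely for illustration.
ABUSIVE_MARKERS = {"slur", "threat"}

corpus = [
    {"text": "helpful answer", "labels": set()},
    {"text": "hostile reply", "labels": {"threat"}},
    {"text": "edgy joke", "labels": {"slur"}},
    {"text": "neutral fact", "labels": set()},
]

def select(corpus, policy):
    """Return the training subset a given selection policy admits."""
    if policy == "as_is":
        # Keep everything: the model inherits the corpus's own skew.
        return corpus
    if policy == "abuse_filtered":
        # Drop examples carrying abusive labels: a deliberately chosen skew.
        return [ex for ex in corpus if not (ex["labels"] & ABUSIVE_MARKERS)]
    raise ValueError(f"unknown policy: {policy}")

kept_as_is = select(corpus, "as_is")
kept_filtered = select(corpus, "abuse_filtered")
print(len(kept_as_is), len(kept_filtered))  # 4 2
```

Either way a selection policy is in force; the "as_is" policy is no more neutral than the filtered one, it just leaves the choice to whoever assembled the corpus.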