sevenless | 9 years ago
Programming a computer to perform a task is a powerful way to reveal hidden assumptions. What the problem of machine learning on human data reveals is that there is no objectivity. Our most 'objective' models will simply learn and then reinforce biases and prejudices, and cause harm to people. We can't build objective, neutral, value-free models of human behavior, because humans change their behavior in response to the models. When we reject stereotypes we are imposing a set of values, just as much as when we embrace them. Machine learning forces us to confront the fact that this is exactly what we are doing.
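To make the feedback-loop claim concrete, here is a toy sketch (my own illustration, not from the comment): a model that "objectively" approves applicants in proportion to each group's historical approval rate, where its own decisions are fed back into the history. The group labels and the initial 60%/40% skew are hypothetical.

```python
# Toy feedback-loop simulation: a model trained on its own past
# decisions amplifies a small initial skew between two groups.

def approval_rate(history, group):
    # Fraction of past cases from this group that were approved.
    decisions = [approved for g, approved in history if g == group]
    return sum(decisions) / len(decisions)

def model_decides(history, group):
    # "Objective" rule: approve if the group's historical rate >= 50%.
    return approval_rate(history, group) >= 0.5

def simulate(rounds=10):
    # Hypothetical starting data: group A 60% approved, group B 40%.
    history = ([("A", True)] * 6 + [("A", False)] * 4 +
               [("B", True)] * 4 + [("B", False)] * 6)
    for _ in range(rounds):
        for group in ("A", "B"):
            # The model's decision becomes part of the training data.
            history.append((group, model_decides(history, group)))
    return approval_rate(history, "A"), approval_rate(history, "B")

rate_a, rate_b = simulate()
print(rate_a, rate_b)  # the initial 0.6/0.4 gap widens to 0.8/0.2
```

The model never encodes prejudice explicitly; it just learns from the data. But because its outputs change the data, the small initial gap grows every round, which is the "feedback between science and society" problem in miniature.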
There's a kind of philosophical crisis going on here. We need an entirely new way to think about the limits of objectivity in human sciences, and how to create ethical models in the presence of feedback between science and society. The language of objective physical science doesn't work.