gedy|5 years ago

The issue I take with the concerns raised in the paper is with claims like:

"A language model that has been trained on such data will pick up these kinds of problematic associations."

"Such data" is basically everyday, normal discourse, and some of the "problematic associations" come from training text that includes phrases like "woman doctor", "both genders", etc. While I get the point, this itself is a biased interpretation of discourse, and it would be worrisome, imho, to have people filter models through their own biases rather than through the language as the vast majority of people actually use it.

There are all sorts of things that people broadly agree are offensive that can be, or have been, produced by ML models. Why not stick to those and discuss mitigation strategies? The paper could be vastly improved, and still make its point in a less divisive way, by sticking to things everyone agrees are beyond the pale.

free_rms|5 years ago

And that's not even getting into the whole "global warming is racism" digression. Maybe it is, maybe it isn't, but it's definitely not a topic for an AI paper. Just say carbon is bad!

It seems like the goal of the paper is to make a bunch of sociological statements and impute the authority of "AI Ethics Lead Researcher" to them -- they're just political opinions; everyone has those.

dredmorbius|5 years ago

Obviously, bear in mind this is a(n early) draft. That means that _it’s not finished_. You really wouldn’t want to read the first cut of Lord of the Rings (yeah, I know some of you would).

moomin|5 years ago
wmf|5 years ago
h1bthrowaway|5 years ago