ASpring|4 years ago
I think the author is generally correct, but there is a lot of focus on algorithmic design and not on how we collectively decide what is fair and ethical for these algorithms to do. Right now it is entirely up to the algorithm developer to articulate their version of "fair" and implement it however they see fit. I'm not convinced that is a responsibility that belongs to private corporations.
fennecfoxen|4 years ago
Private corporations are, by and large, the entities which execute their business using these algorithms, which their employees write.
They are already responsible for business decisions, whether made using computers or otherwise. Indeed, who else would possibly manage such a thing? This is tantamount to saying that private corporations should have no business deciding how to execute their business. That's definitely an opinion you can have; it's just an incredibly statist, central-planning opinion.
naasking|4 years ago
No business is allowed to discriminate against protected groups. That's arguably a third-party standard for fairness, but I don't think this qualifies as central planning.
I see no reason why other types of third-party standards would be impossible or infeasible for machine learning applications.
bluesummers5651|4 years ago
The upshot, to me, is that businesses, whether operating in criminal-justice risk assessment, advertising, or anything else, don't make clear which definition of fairness (if any) they are enforcing, and so it becomes difficult to determine whether they are doing a good job at it.
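The point that fairness has multiple, competing definitions can be made concrete with a small sketch. The function names, toy predictions, and group labels below are all made up for illustration; the sketch scores one classifier under two common definitions and shows it can satisfy one while violating the other.

```python
# Hypothetical toy example: the same predictions scored under two
# common definitions of "fairness". All data here is illustrative.

def demographic_parity_gap(preds, group):
    """Gap in positive-prediction rate between groups "A" and "B"."""
    rate = lambda g: sum(p for p, grp in zip(preds, group) if grp == g) / group.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(preds, labels, group):
    """Gap in true-positive rate (recall) between groups "A" and "B"."""
    def tpr(g):
        pos = [(p, y) for p, y, grp in zip(preds, labels, group) if grp == g and y == 1]
        return sum(p for p, _ in pos) / len(pos)
    return abs(tpr("A") - tpr("B"))

# Toy data: model predictions, true outcomes, and group membership.
preds  = [1, 1, 0, 0, 1, 0, 0, 0]
labels = [1, 0, 1, 0, 1, 1, 0, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_gap(preds, group))         # 0.25: "unfair" by this definition
print(equal_opportunity_gap(preds, labels, group))  # 0.0: "fair" by this one
```

Here the classifier is perfectly "fair" by equal opportunity but clearly "unfair" by demographic parity, which is exactly why it matters that a business state which definition it is enforcing.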
ASpring|4 years ago
Rather, I view it more along the lines of how the US currently regulates accessibility standards for the web or enforces mortgage non-discrimination for protected categories. The role of government here is to identify a class of tangible harms that can result from unfair models deployed in various contexts and to legislate in a way that ensures those harms are avoided.
ZeroGravitas|4 years ago
If you trained a model to predict the outcome purely from the protected class and it was successful (in terms of predictive power), does that mean fairness is effectively impossible?
e.g. if you trained an educational-performance predictor on parental wealth, I'd guess it would do reasonably well. And there is the argument that your parents are rich because they're smart, and you are genetically connected to them.
But there's obvious counterexamples, like children adopted by rich families or children of refugees (who may have been professors or surgeons in their home country).
So if we can't avoid the bias in that extreme example, then adding extra data is only going to bury that truth under confusion.
I'm not sure we're ready to admit to ourselves that we disadvantage the children of the poor, which will make this whole AI-bias thing a tricky conversation to have.
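The "predict purely from the protected class" scenario above can be sketched in a few lines. The data-generating process below is entirely made up (the 0.8/0.3 rates are assumptions, not real statistics); it just illustrates that when the sensitive feature correlates with the outcome, a model that looks at nothing else still beats chance.

```python
import random

random.seed(0)

# Synthetic data, invented for illustration: educational outcome is
# correlated with parental wealth by construction, with exceptions
# (the counterexamples in the thread: adoptees, refugee families, ...).
def sample():
    wealthy = random.random() < 0.5
    performs_well = random.random() < (0.8 if wealthy else 0.3)
    return wealthy, performs_well

data = [sample() for _ in range(10_000)]

# "Model" that uses only the single sensitive feature and nothing else.
predict = lambda wealthy: wealthy

accuracy = sum(predict(w) == y for w, y in data) / len(data)
print(f"accuracy: {accuracy:.2f}")  # roughly 0.75, well above the ~0.5 chance baseline
```

The model is predictive precisely because the bias is baked into the data, which is the commenter's point: adding more features on top would not remove that underlying correlation, only obscure it.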