It's common in fintech for data/ML models to go through a similar review. If you happen to disenfranchise a group of people because your model said not to lend to them, you risk legal jeopardy.
To clarify, I think it's good that this is a practice.
There are so many ways you can accidentally systematize racism in software like automated lending.
In the past there were explicitly racist policies like redlining. These left behind a historical data set of loan denials concentrated in a specific racial group. If that group has other traits that correlate with race, e.g. the neighborhood they live in, then you could have a model that doesn't explicitly include race as a feature, yet is trained on that historical data with some subset of racially correlated features and, as a result, disproportionately excludes people of that race.
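A minimal sketch of that mechanism, using entirely made-up synthetic data: race is correlated with neighborhood, historical (redlining-era) decisions denied loans in one neighborhood, and a "model" trained only on neighborhood reproduces the racial disparity without ever seeing race.

```python
import random

random.seed(0)

# Hypothetical synthetic applicants: race correlates with neighborhood
# (the proxy feature), and historical redlining denied most loans in
# neighborhood "A" regardless of creditworthiness.
applicants = []
for _ in range(10_000):
    race = random.choice(["X", "Y"])
    if race == "X":
        neighborhood = random.choices(["A", "B"], weights=[0.8, 0.2])[0]
    else:
        neighborhood = random.choices(["A", "B"], weights=[0.2, 0.8])[0]
    denied = neighborhood == "A" and random.random() < 0.9
    applicants.append((race, neighborhood, denied))

# "Model" trained only on neighborhood -- race is never a feature:
# predict denial if that neighborhood's historical denial rate exceeds 50%.
def denial_rate(nbhd):
    outcomes = [d for (_, n, d) in applicants if n == nbhd]
    return sum(outcomes) / len(outcomes)

model = {n: denial_rate(n) > 0.5 for n in ("A", "B")}

# Measure the model's denial rate per racial group.
def rate_by_race(race):
    predictions = [model[n] for (r, n, _) in applicants if r == race]
    return sum(predictions) / len(predictions)

print(f"denial rate, group X: {rate_by_race('X'):.0%}")
print(f"denial rate, group Y: {rate_by_race('Y'):.0%}")
# Group X is denied far more often, even though the model never saw race.
```

The point of the toy example is that dropping the sensitive attribute from the feature set doesn't remove the bias; the proxy feature carries it through from the historical labels.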
A review doesn't necessarily mean you need to resolve all diversity/inclusion issues. It can merely require that you identify the issues and understand the risks of not resolving them.