This is a case of people actively building, and deciding to use, a tool with plausible deniability built in. How do you regulate people who shirk all accountability? It's much more reasonable to require that they use a tool that shows its work.
everforward|3 years ago
This happens with or without AI. It's not hard to build an algorithm that discriminates against people without doing so explicitly: you can use certain bits of data as proxies for whatever characteristic you want to discriminate against (ZIP code, household income, marital status, and so on).
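As a toy illustration of the proxy effect (entirely made-up data and thresholds, not any real system): the sketch below never feeds the protected attribute into the scoring formula, yet approval rates still split sharply by group, because ZIP code carries the same signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute -- never given to the model.
group = rng.integers(0, 2, n)

# ZIP code correlates strongly with group membership (redlining-style).
zip_score = group * 0.8 + rng.normal(0, 0.3, n)

# The "creditworthiness" score actually computed: proxy plus noise,
# with no protected attribute anywhere in the formula.
score = zip_score + rng.normal(0, 0.5, n)
approved = score > np.median(score)

# Approval rates still differ by group.
for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.2f}")
```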
Given the myriad ways to arrive at a discriminatory conclusion, it's easier to regulate results than tools.
One potential answer to your question is to make the laws carry strict liability (I assume the EU has something along those lines). Plausible deniability no longer exists because intent doesn't matter: the company is liable if someone can demonstrate discrimination, whether it was intentional or accidental.
That ends up pushing towards something similar to what you want: it encourages companies to adopt tools that can show their work, to fend off lawsuits, without the regulation being tied to any particular tool. The other alternative is extensive testing to show that discrimination doesn't happen, but I think that will still worry companies, given the difficulty of proving a negative.
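For a sense of what that outcome testing might look like, here's a minimal sketch of one common disparate-impact check, modeled on the US EEOC's "four-fifths rule" (a group's selection rate should be at least 80% of the most-favored group's rate). The group names and counts are hypothetical, and an EU regime would differ in detail. Note that passing such a check on one dataset doesn't prove the absence of discrimination in general, which is exactly the prove-a-negative worry.

```python
# Sketch of an outcome-based disparate-impact check (four-fifths rule).
# Hypothetical data: outcomes maps group name -> (selected, total).
def impact_ratios(outcomes):
    """Return each group's selection rate relative to the highest rate."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

ratios = impact_ratios({"group_a": (48, 100), "group_b": (30, 100)})
for group, ratio in ratios.items():
    flag = "OK" if ratio >= 0.8 else "POTENTIAL DISPARATE IMPACT"
    print(f"{group}: ratio {ratio:.2f} -> {flag}")
```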