BasHamer | 7 years ago

Data-driven algorithms discriminate along undesirable/illegal vectors; they are utterly amoral in optimizing their solutions. Even if the algorithm does not have access to the "Age" field, there are plenty of proxies, like which reunion tour you liked. The same goes for race, gender, sexual identity, religion, etc.
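
To make the proxy point concrete, here is a toy Python sketch. All the data, names, and numbers are invented; the point is only that a model trained without the age column can still recover age through a correlated taste signal.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    age = rng.integers(18, 70, n)
    # Proxy: the older you are, the more likely you liked the reunion tour.
    liked_reunion_tour = (rng.random(n) < (age - 18) / 52).astype(int)
    # Made-up hiring label that (illegally) depends on age.
    hired = (rng.random(n) < np.where(age < 40, 0.6, 0.3)).astype(int)

    # Train WITHOUT the age column -- the model only sees the proxy.
    X = liked_reunion_tour.reshape(-1, 1)
    model = LogisticRegression().fit(X, hired)
    scores = model.predict_proba(X)[:, 1]

    # The scores still track age through the proxy.
    print(np.corrcoef(scores, age)[0, 1])  # clearly negative

Drop that proxy and some other correlated feature picks up the slack; you can't remove them all.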

To solve this, we either need training data that contains no illegal/undesired discrimination, or we need to make the system moral. I think the first is impossible, and the second is what we will do sooner or later.

biztos | 7 years ago

How would you make the system moral?

Let's say "moral" means "won't discriminate based on X" and the same "system" is used by everyone, which of course it wouldn't be.

So do you make up a bunch of fake "people" who are equal in everything except X, and test that it doesn't advantage or disadvantage the Xs? Would that even be possible if the "system" is getting its inputs from social media?
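
For what it's worth, that paired-profile test would look something like this sketch (the scorer, fields, and tolerance are all stand-ins, not anything real):

    # Stand-in scorer so the sketch runs; in practice this is the
    # black-box model under test, which you may only reach via an API.
    def score_model(p):
        return 0.5 + 0.02 * p["years_exp"] - (0.1 if p["age"] >= 50 else 0.0)

    def counterfactual_gap(score, profile, field, value_a, value_b):
        """Score two profiles identical except for one protected field."""
        return score({**profile, field: value_a}) - score({**profile, field: value_b})

    profile = {"years_exp": 7, "degree": "BS"}
    gap = counterfactual_gap(score_model, profile, "age", 25, 55)
    if abs(gap) > 0.05:  # the tolerance is a policy choice, not physics
        print(f"possible age effect: score gap {gap:+.3f}")

But that only works when you control the inputs, which falls apart the moment the "system" pulls from social media.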

Do you mandate some kind of audit of the system's decisions, and require it to choose on average the same percentage of Xs as... what? As there are Xs in the general population? In the candidate pool?
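
The audit version at least has a real-world precedent: the EEOC's four-fifths rule compares each group's selection rate to the most-selected group's. A rough sketch with invented numbers (note it still doesn't answer the baseline question):

    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, selected) -> selection rate per group."""
        totals, picks = defaultdict(int), defaultdict(int)
        for group, selected in decisions:
            totals[group] += 1
            picks[group] += int(selected)
        return {g: picks[g] / totals[g] for g in totals}

    # Invented audit log: 60/100 under-40s selected vs 20/100 over-40s.
    decisions = ([("under_40", True)] * 60 + [("under_40", False)] * 40
                 + [("over_40", True)] * 20 + [("over_40", False)] * 80)
    rates = selection_rates(decisions)
    best = max(rates.values())
    for group, rate in rates.items():
        if rate < 0.8 * best:  # the four-fifths threshold
            print(f"{group}: {rate:.0%} vs best {best:.0%} -- adverse impact flag")

And the number that comes out is only as meaningful as the pool you measured it against.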

I'd love for this kind of thing to work but even in an idealized hypothetical version it's hard to see how it could.

I think in tech we've already shown that shame is no barrier to hiring discrimination, and as HR+AI-type filtering systems preselect candidates for you, it'll be harder and harder for you, the government, or the disadvantaged candidates to even know whether you're discriminating.

You'll judge the "system" based solely on whether the set of candidates you got achieved the outcome you needed.

BasHamer | 7 years ago

Train it.

Give it examples of what we consider moral and what we consider immoral, and have it figure it out. The solutions that algorithms create are less complex than the data they are built from, so it should be relatively easy to model those solutions as data. We would have to train it on what we consider moral and immoral; that would require presenting the solutions in a way that lets a human make the determination and provide feedback.
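
A rough sketch of what I mean, assuming you can reduce each system's observable behavior to a handful of measurable features like rate gaps and counterfactual score gaps (every number and label below is invented for illustration):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each row summarizes one audited system's behavior:
    # [selection-rate gap between groups, mean counterfactual score gap].
    X = np.array([
        [0.02, 0.01],   # treats groups nearly identically
        [0.40, 0.25],   # large disparities
        [0.05, 0.03],
        [0.30, 0.20],
    ])
    y = np.array([1, 0, 1, 0])  # 1 = humans judged it moral, 0 = immoral

    judge = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(judge.predict([[0.35, 0.22]]))  # -> [0], flagged as immoral

The hard part is the featurization and labeling, not the fitting.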

As for how we get to that solution, it will probably come once there is liability for discrimination, i.e., lawsuits like the one mentioned. I don't think mandates work well; it would be more appropriate to make people liable for the decisions made by amoral systems. That liability would create demand for moral systems.

s73v3r_ | 7 years ago

"How would you make the system moral?"

Don't use it.