tigeba | 3 years ago
The core of the critique is that, to the extent anonymization works, it does not produce the outcome the authors desire. They explicitly ask for group-based discrimination to be pre-baked into any AI system in order to produce equity, not equality.
"First, industry practitioners developing hiring AI technologies must shift from trying to correct individualized instances of “bias” to considering the broader inequalities that shape recruitment processes. Pratyusha Kalluri argues that AI experts should not focus on whether or not their technologies are technically fair but whether they are “shifting power” towards the marginalized (Kalluri, 2020). This requires abandoning the “veneer of objectivity” that is grafted onto AI systems (Benjamin, 2019a, 2019b) so that technologists can better understand their implication—and that of the corporations within which they work—in the hiring process. For example, practitioners should engage with how the categories being used to sort, process, and categorize candidates may have historically harmed the individuals captured within them. They can then begin to problematize the assumptions about “gender” and “race” they are building into AI hiring tools even as they intend to strip racial and gender attributes out of recruitment."
onos | 3 years ago