throwaway713 | 2 years ago
For example, consider two hypothetical but identical individuals: one born into a low-income neighborhood and one born into a high-income neighborhood. If you develop a model to predict what we currently categorize as "crime" (the definition of which is its own separate issue), you will find that a neighborhood's income is inversely correlated with its density of crime. If this is the only factor in your predictive model, then you will reduce crime more effectively by directing attention toward the low-income neighborhood.

But now there is an inherent unfairness: the additional scrutiny of the low-income neighborhood means the first individual is more likely to be caught for a crime than the second, despite both having an equal likelihood of committing one.

This also creates a self-reinforcing situation. Having more statistics on the low-income subset of the population now lets you improve your predictive model even further, using additional variables that are only relevant to that subset, while neglecting variables that would be relevant to predicting crime in high-income neighborhoods. Repeat this process a few times and soon you have a massive amount of unfairness in society.
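The feedback loop can be sketched in a few lines of simulation. Everything here is invented for illustration (the crime rate, population size, initial patrol skew, and the rule "detection probability equals patrol share"); the point is only that when the model reallocates enforcement in proportion to *detected* crime, the skewed data keeps confirming the skewed allocation even though the two neighborhoods are identical by construction:

```python
import random

random.seed(0)

# Two neighborhoods with IDENTICAL true crime rates; the only
# difference is the model's initial belief (e.g. a prior learned
# from income data).
TRUE_CRIME_RATE = 0.05   # same underlying rate in both neighborhoods
POPULATION = 10_000      # people per neighborhood

# Initial patrol allocation is skewed toward neighborhood 0
# (the "low-income" one), mimicking an income-based prior.
patrol = [0.55, 0.45]
detected = [0, 0]

for _ in range(10):
    for n in (0, 1):
        # Crimes committed are identically distributed in both
        # neighborhoods; detection probability scales with patrol share.
        crimes = sum(random.random() < TRUE_CRIME_RATE
                     for _ in range(POPULATION))
        detected[n] += sum(random.random() < patrol[n]
                           for _ in range(crimes))
    # The "model" reallocates patrols in proportion to detected crime.
    # Because detections were themselves driven by patrol share, the
    # data simply reflects the initial skew back at the model.
    total = sum(detected)
    patrol = [detected[0] / total, detected[1] / total]

print(f"patrol share after 10 rounds: {patrol[0]:.2f} vs {patrol[1]:.2f}")
print(f"cumulative detected crimes:   {detected[0]} vs {detected[1]}")
```

Note that the data never corrects the bias: the detection gap between the two identical neighborhoods persists indefinitely, so an individual in neighborhood 0 stays permanently more likely to be caught. A model that additionally refined itself on the over-sampled neighborhood's data would amplify the gap rather than merely preserve it.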
It's probably impossible to eliminate all unfairness while still maintaining any sort of ability to control crime, but what is the appropriate threshold for this tradeoff?