jmagoon | 3 years ago
The data bias issue is about having way more samples of class X than of class Y, or class Y sharing an unknown but correlated feature that the developer doesn't identify (medical images with labels have this problem), or any number of other "biases": all the images being too bright, or taken with a camera that differs from the one that's going to be deployed in production, etc., etc.
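As a rough sketch of what catching those biases before training can look like (the audit function, thresholds, and toy data here are all illustrative, not any standard tool): count samples per class, and check whether a nuisance feature like mean brightness tracks the label.

```python
import numpy as np

def audit_dataset(images, labels):
    """Flag two common dataset biases before training: class
    imbalance, and a nuisance feature (here, brightness) that
    correlates with the label."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    # Ratio of largest to smallest class; 1.0 means perfectly balanced.
    imbalance_ratio = counts.max() / counts.min()

    # Mean pixel brightness per class; a large gap suggests the model
    # could learn brightness instead of the real signal.
    brightness = np.array([img.mean() for img in images])
    per_class = {c: brightness[labels == c].mean() for c in classes}
    brightness_gap = max(per_class.values()) - min(per_class.values())

    return imbalance_ratio, brightness_gap

# Toy example: class 1 is both rarer *and* systematically brighter,
# so a classifier could "succeed" by thresholding brightness alone.
rng = np.random.default_rng(0)
imgs = [rng.uniform(0.0, 0.5, (8, 8)) for _ in range(90)] + \
       [rng.uniform(0.5, 1.0, (8, 8)) for _ in range(10)]
labs = [0] * 90 + [1] * 10

ratio, gap = audit_dataset(imgs, labs)
print(ratio, gap)  # 9-to-1 imbalance, large brightness gap
```

Nothing fancy, but this is exactly the kind of "normal engineering safeguard" that catches the too-bright-images problem before anyone ships a model.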
These are real issues that can be fixed, engineered around, and understood in terms of producing reliable output, and XAI (explainable AI) absolutely helps with that! But for whatever reason, journalists don't seem to understand that these systems need normal engineering safeguards, like any automated system, and instead bring it all back to one poorly engineered model so they can talk about Big Bad Racist AI always denying loans based on race.