ewjordan | 7 years ago

If they were using any sort of neural-network approach with stochastic gradient descent, the network would have to spend some "gradient juice" to cut a divot that recognizes and penalizes women's colleges and the like. It wouldn't do this just because there were fewer women in the batches; rather, it would simply not assign any weight to those factors.

Unless they presented lots of unqualified resumes of people not in tech as part of the training, which seems like something someone might think reasonable. Then the model would (correctly) determine that very few people coming from women's colleges are CS majors, and penalize them. However, I'd still expect a well-built model to condition on the stronger signal: if someone was a CS major, it would take that into account and drop any default penalty for having attended a particular college.
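
To make that concrete, here's a toy logistic-regression sketch (the features, rates, and training loop are all invented for illustration; nobody outside Amazon knows what their actual pipeline looked like). When the model can't see the CS-major signal, the college token soaks up the correlation and picks up a penalty; once the real signal is available, the default penalty mostly disappears:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 20000

    # Hypothetical binary features per resume (made up for illustration).
    cs_major = rng.random(n) < 0.5
    # The women's-college token correlates with NOT being a CS major,
    # because the training set includes lots of non-tech resumes.
    womens_college = np.where(cs_major,
                              rng.random(n) < 0.02,
                              rng.random(n) < 0.10)

    # Ground truth: "qualified" depends ONLY on being a CS major.
    y = np.where(cs_major, rng.random(n) < 0.8, rng.random(n) < 0.1).astype(float)

    def fit_logreg(X, y, l2=1e-3, lr=0.5, steps=3000):
        # Plain batch gradient descent on L2-regularized logistic loss.
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            w -= lr * (X.T @ (p - y) / len(y) + l2 * w)
            b -= lr * np.mean(p - y)
        return w, b

    # Model that can't see the CS-major signal: the college token absorbs
    # the correlation and picks up a spurious penalty.
    w1, _ = fit_logreg(womens_college[:, None].astype(float), y)
    print("college-only weight:", w1)            # clearly negative

    # Model that also sees the real signal: the default penalty ~vanishes.
    X2 = np.column_stack([cs_major, womens_college]).astype(float)
    w2, _ = fit_logreg(X2, y)
    print("[cs_major, college] weights:", w2)    # college weight near zero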

If the whole thing was hand-engineered, then of course all bets are off. It's hard to deal well with unbalanced classes, and as you mentioned, without knowing what their data looks like we can only speculate on what really happened.

But I will say this: this is not a general failure of ML. These sorts of problems can be avoided if you know what you're doing, unless your data is garbage.

lalaland1125 | 7 years ago

> It wouldn't do this just because there were fewer women in the batches, rather it would just not assign any weight to those factors.

That's exactly the issue we are talking about here. Women's colleges would have less training data, so they would get updated less. For many classes of models (such as neural networks with weight decay or common initialization schemes), this would encourage the model to be more "neutral" about women and assign predictions closer to 0.5 for them. This might not affect the overall accuracy for women (as it might not change whether they land above or below 0.5), but it would make the predictions for women less confident, and thus give them a lower ranking (closer to the middle of the pack as opposed to the top).
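
A toy sketch of that mechanism (synthetic data and invented numbers, not their actual model): two mutually exclusive school tokens that are equally predictive, one of which is rare. With weight decay in the loss, the rare token's weight settles much closer to zero, so its holders score nearer 0.5 and rank lower despite the identical base rate:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50000

    # Mutually exclusive school tokens: a common one (95% of resumes)
    # and a rare one (5%). Both groups are qualified at the SAME 80%
    # rate, so the tokens carry identical signal; they only differ in
    # how often they show up. No intercept, so each weight is directly
    # that group's score logit.
    rare = rng.random(n) < 0.05
    X = np.column_stack([~rare, rare]).astype(float)
    y = (rng.random(n) < 0.8).astype(float)

    w = np.zeros(2)
    l2, lr = 0.05, 0.5
    for _ in range(5000):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        # The data gradient for the rare token is scaled by its ~5%
        # frequency, while the decay term l2*w hits both at full strength.
        w -= lr * (X.T @ (p - y) / n + l2 * w)

    print("weights [common, rare]:", w)
    print("score, common-group resume:", 1 / (1 + np.exp(-w[0])))  # ~0.75
    print("score, rare-group resume:  ", 1 / (1 + np.exp(-w[1])))  # ~0.56, nearer 0.5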

ewjordan | 7 years ago

I don't think I'm with you. A neural net cannot do this: picking apart male and female tokens requires a signal in the gradients that forces the two classes apart. If there's no gradient, then something like weight decay will just zero out the weights for the "gender" feature, even if it's there to begin with. Confidence wouldn't enter into it, because the feature is irrelevant to the loss function.

A class imbalance doesn't change that: if there's no gradient to follow, then the class in question will be strictly ignored unless you've somehow forced the model to pay attention to it in the architecture (which is possible, but would take some specific effort).
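
Here's a quick sketch of what I mean (again with made-up toy data): a token that's uncorrelated with the label produces no sustained gradient, so decay drives its weight to roughly zero no matter how rare it is:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 50000

    token = (rng.random(n) < 0.05).astype(float)  # rare token, 5% of resumes
    skill = rng.normal(size=n)                    # a genuinely predictive feature
    y = (skill + 0.5 * rng.normal(size=n) > 0).astype(float)  # label ignores token

    X = np.column_stack([skill, token])
    w = np.zeros(2)
    l2, lr = 0.01, 0.5
    for _ in range(3000):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y) / n + l2 * w)

    # No gradient signal for the token => decay keeps its weight at ~0,
    # so the rare token has no effect on anyone's score.
    print("weights [skill, token]:", w)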

What I'm suggesting is that it's likely that they did (perhaps accidentally?) let a loss gradient between the classes slip into their data, because they had a whole bunch of female resumes that were from people not in tech. That would explain the difference, whereas at least with NNs, simply having imbalanced classes would not.