core-questions | 4 years ago

Anyone who finds themselves offended by something like this truly needs to take stock of themselves, their life, and their situation. Here we are in the age of wonders, where a computer can translate text from other languages for you at the click of a button, and all some people can think to do is get mad that it isn't perfectly gender-diverse.

I cannot fathom having this mindset. You would seriously have to have nothing else wrong in your life to make this a bone of contention.

smt88 | 4 years ago

My turn:

Anyone who finds themselves offended by something like this truly needs to take stock of themselves, their life, and their situation. Here we are in the age of wonders, where a computer can translate text from other languages for you at the click of a button, and all some people can think to do is get mad that other people want to improve it. I cannot fathom having this mindset. You would seriously have to have nothing else wrong in your life to make this a bone of contention.

More seriously: why does this bother you so much? Why didn't you just ignore it and keep scrolling?

It's technologically interesting, even if you want to dispute whether it has any social impact. Why does it make you angry that people even discuss the (fascinating!) concept that computers can learn gender bias? Should we stop studying all forms of bias in computing? Does it all offend you this much?
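
If anyone wants to see how concrete this is, here's a minimal sketch of the kind of measurement people do with word embeddings, using pretrained GloVe vectors via gensim (the model choice is just an example; the exact numbers depend on the vectors, but the direction of the associations is the interesting part):

    import gensim.downloader as api  # assumes gensim is installed; downloads vectors on first use

    # Pretrained 50-dimensional GloVe vectors.
    vecs = api.load("glove-wiki-gigaword-50")

    # A crude "gender direction": she minus he.
    gender_axis = vecs["she"] - vecs["he"]

    for word in ["nurse", "receptionist", "engineer", "programmer"]:
        # Positive projection leans toward "she", negative toward "he".
        proj = float(vecs[word] @ gender_axis)
        print(f"{word:>14s}: {proj:+.3f}")

Nothing in that pipeline was told anything about gender; whatever associations show up fall straight out of co-occurrence statistics in the training text.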

core-questions | 4 years ago

> Why didn't you just ignore it and keep scrolling?

Doesn't work like that. If those of us who retain the values and views that formed the West (e.g., deep inquiry into all topics regardless of orthodoxy) abdicate all responsibility for weighing in when we see those values attacked, how exactly is that going to help the cause?

> Should we stop studying all forms of bias in computing?

Loaded question. Studying how ML systems reflect their data is fine; the problem arises when an ML system gives us results that accurately reflect real-world data but that we don't like based on our progressivist / ultra-liberal priors.

An ML system can trivially tease statements out of a data set, like "women are more likely to pursue career X," "this group is more likely to commit murder," or "this person is more likely to be hired for this position." When such a statement reveals an uncomfortable truth that has been pre-ordained as offensive regardless of its basis in reality, we have a choice: reject the facts based on our priors, or update our thinking to reflect reality. Sadly, the trend has been to do the former, and to punish anyone who does the latter.
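
To illustrate how trivially: here's a toy sketch with synthetic data and invented base rates, where a logistic regression does nothing but reproduce the group-level rates present in its training set:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic data with invented base rates: the outcome occurs for
    # roughly 70% of group 1 and 30% of group 0.
    n = 10_000
    group = rng.integers(0, 2, size=n)
    outcome = rng.random(n) < np.where(group == 1, 0.7, 0.3)

    model = LogisticRegression().fit(group.reshape(-1, 1), outcome)

    # The "learned" probabilities are just the base rates in the data.
    print(model.predict_proba([[0], [1]])[:, 1])  # roughly [0.3, 0.7]

The model isn't doing anything exotic; it's a mirror held up to whatever rates are in the data.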

Just go look at what Amazon had to do with their ML-based resume-sorting system. It revealed that the strongest resumes (ranked by likelihood of being hired) belonged to Asian- and White-presenting males, which we all know matches their actual hiring, because those are the current demographics of the industry in general. Of course, such a system cannot be allowed to operate, even though the humans who keep doing the screening in its place will keep applying the same selection rubric they applied before. Nothing is solved, nothing is learned. It just gets chalked up to "systemic bias" without any actual investigation of the underlying reasons (such as IQ distributions, access to technology, and whatever raw factors shape people's individual and aggregate career interests).