ospohngellert | 4 years ago
You said: "an interesting opportunity for someone to skip implementation of anti bias and potentially end up with a more effective model."
Having the model use the fact that men are more likely to be programmers is clearly unhelpful in many contexts, such as screening resumes for programming roles. In that context, it will make the model more likely to accept men than women for programming roles, regardless of the candidates' skill.
Edit: Edited for clarity
robbedpeter | 4 years ago
The whole scenario is contrived and not relevant to the functionality of these language models. It's like complaining that your Formula 1 car doesn't have a snowplow mount. Even if you add one, that's not how you should be using the tool.
The models are trained on human-generated text. They model human biases: preferences for well-being, humor, racism, sexism, and intelligence or ignorance. The ability to generate biased output is also the ability to recognize bias. It's up to the prompt engineer to develop a methodology that selects against bias.
You can use prompts to review the output: is this answer biased? Sexist? Racist? Hurtful? Shallow? Create a set of 100 questions that methodically probe for potential bias and negative affect, and you could well arrive at output that is more rigorously fair and better explained than most humans could manage in the casual execution of whatever task you're automating.
Zero-shot inference is just a starting point. In much the same way that people shouldn't blurt out whatever first leaps to mind, meaningful output requires multiple passes, as in the sketch below.
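
A minimal sketch of that review loop, assuming a generic generate() helper that wraps whatever model call you're using; the helper, the review questions, and the prompt wording here are illustrative, not from any particular library:

    # Hypothetical stand-in for a call to your language model
    # (OpenAI API, local model, etc.) -- replace with your own wrapper.
    def generate(prompt: str) -> str:
        raise NotImplementedError

    # A small subset of the kind of review questions described above.
    REVIEW_QUESTIONS = [
        "Is this answer biased?",
        "Is this answer sexist?",
        "Is this answer racist?",
        "Is this answer hurtful?",
        "Is this answer shallow?",
    ]

    def review_and_revise(task_prompt: str, max_passes: int = 3) -> str:
        answer = generate(task_prompt)  # zero-shot first draft
        for _ in range(max_passes):
            flags = []
            for question in REVIEW_QUESTIONS:
                verdict = generate(
                    f"{question}\n\nAnswer under review:\n{answer}\n\n"
                    "Reply YES or NO, then explain briefly."
                )
                if verdict.strip().upper().startswith("YES"):
                    flags.append(verdict)
            if not flags:
                return answer  # passed every review question
            critique = "\n".join(flags)
            # Ask for a revision that addresses the reviewer's objections.
            answer = generate(
                f"{task_prompt}\n\nA reviewer raised these problems with a "
                f"previous draft:\n{critique}\n\nWrite a revised answer that "
                "addresses them."
            )
        return answer

The same pattern scales to a larger question bank; the point is that the model's ability to recognize bias can be turned back on its own output before anything is accepted.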
ospohngellert | 4 years ago