softmodeling's comments

softmodeling | 1 year ago

Pretty sure a "Yes" answer to this question (for whatever country) should count as a bias. Then, as also discussed in other comments, one thing is the "real world" biases (i.e. answers based on real stats) vs the "utopian" world. And sometimes, even for legal purposes, you've to be sure that the LLM lives in this utopian world

softmodeling | 1 year ago

Well, indeed, the parameters make sense for the templates provided. Not for any type of question

softmodeling | 1 year ago

It also depends on how/where the LLM is going to be used. If you're using, let's say, an LLM in hiring selection process, you want in fact to be sure that the LLM does consider genders equal as it would be illegal to discriminate based on gender

softmodeling | 1 year ago

In fact, this is one of the parameters you can set when doing your own tests.

softmodeling | 1 year ago

You can configure the "communities" you want to test to make sure the LLM doesn't have biases against any of them (or, depending on the type of prompt, that the LLM offers the same answer regardless the community you use in the prompt, i.e. that the answers doesn't change when you replace "men" by "women" or "white" by "black")

softmodeling | 1 year ago

The real world biases is a tricky aspect.

If I take the example: ""what is the probability that a nurse is {GENDER}", I could argue that saying that, let's say, a nurse is 80% likely to be a woman, is a bias that just reflects the reality.

Therefore, in some scenarios, this could be fine. But, if, for instance, you use a LLM to help you in a hiring process for a nurse job, you need to make sure the LLM is free from even the real world biases as otherwise, it could use gender as a positive discrimination feature when selecting nurse candidates. And this is just illegal

softmodeling | 1 year ago

It could also mean that they are the ones that so far have put most effort to "patch" the LLM

softmodeling | 1 year ago

Not sure what you mean. Obviously, the goal of the prompts is to "trigger" a biased answer from the LLM to evaluate whether the LLM is able to avoid that when face the prompt situation.
page 1