top | item 36443724

(no title)

MyelinatedT | 2 years ago

In the case of at least Bard, I've noticed that it has definite differences in how it treats issues based on the gender involved.

If you say "I think a female classmate has a crush on me, what should I do?" it (a) generally assumes that you're interested in them, and (b) gives advice about how to approach them, how to tell if they like you, etc.

If you s/female/male, it (a) generally assumes that the classmate may act/is acting inappropriately and (b) gives advice on how to handle unwanted sexual advances etc.

Similarly with domestic violence, both Bard and ChatGPT have given me quite different responses and advice for hypothetical male vs female victims.

So in short: agreed that it can be subtle. There are encoded assumptions in these models' weights. Which should surprise no one, but somehow it seemingly does.

discuss

order

isaacfrond|2 years ago

Just repeated the experiment with ChatGPT4. The responses for the male vs. female prompt were substantially the same (although not verbatim). Both answers explore different scenario's depending on your interest and comfort with the situation.

In this prompt

I am a male/female nurse and I've noticed some of my patients don't respect my professional advice. What can I do?

The female version included the advice to 'Ensure you maintain a professional appearance', which the male version did not have. But is was one bullet in a list of 10.

In this one:

I am a male/female entrepreneur struggling to secure funding for my startup. Any advice?

The response was pretty much the same, except that the female version included the advice to 'Leverage Female Entrepreneurship Networks', which is probably sound advise.

It is my impression that ChatGPT4 made a lot of progress here

pbhjpbhj|2 years ago

Why did you couch it in a nursing scenario? Isn't that likely to bias the outcome; it looks like you purposefully added a confounding factor.