top | item 40914328

farleykr | 1 year ago

What’s interesting to me is that (if I understand correctly) they’re using GPT as a source of information to make this claim. Usually you see conclusions like these being drawn from some sort of sociological study. But here they’re just talking to GPT and using its answers to determine things about the real world.

Not disputing the claims. But talking to GPT to get answers about the real world that have to do with value judgements is just weird. There’s a big difference between asking GPT to give you a recipe for a cake and asking GPT to help you understand the value the world places on different people.

ath92|1 year ago

The paper is about biases in GPT-4. They're not talking to GPT-4 to determine things about the real world. They're talking to GPT-4 to determine things about GPT-4.

farleykr|1 year ago

> This phenomenon likely reflects that while initiatives to integrate women in traditionally masculine roles have gained momentum, the reverse movement remains relatively under developed.

Not challenging you. Maybe it’s just the phrasing. But that sentence to me reads as if they think the presence of the biases in GPT means that they exist in the real world. And again, not challenging that the biases do exist. Just noting the trend toward trusting GPT in increasingly subjective areas that have to do with moral judgement. To me it’s not too much different than drawing conclusions about the world from religious texts.

ajsnigrutin|1 year ago

But GPT 'learns' from the world. GPT did not sit on the toilet and think about it out of the blue, GPT analyzed "the world" (well,... the written part on the internet) and came to these conclusions and biases.

tossandthrow|1 year ago

The OpenAI models are also aligned on top of their base training in order to have them behave in a certain way.

Therefore it is not really a proxy for the real world.

tossandthrow|1 year ago

Yes, this is not sound and constitutes a severe criticism of the work.

ChatGPT is heavily aligned in order to reduce what society sees as biases.

This study likely just reverse engineers some of these alignments.