farleykr|1 year ago
Not disputing the claims. But talking to GPT to get answers about the real world that have to do with value judgements is just weird. There’s a big difference between asking GPT to give you a recipe for a cake and asking GPT to help you understand the value the world places on different people.
farleykr|1 year ago
Not challenging you. Maybe it’s just the phrasing. But that sentence reads to me as if they think the presence of the biases in GPT means they exist in the real world. And again, I’m not challenging that the biases do exist. I’m just noting the trend toward trusting GPT in increasingly subjective areas that involve moral judgement. To me it’s not much different from drawing conclusions about the world from religious texts.
tossandthrow|1 year ago
Therefore it is not really a proxy for the real world.
tossandthrow|1 year ago
ChatGPT is heavily aligned in order to reduce what society sees as biases.
This study likely just reverse engineers some of these alignments.