top | item 34616077


citilife | 3 years ago

> challenge incorrect assumptions.

I say this without any charge, but this is a MAJOR ethical concern.

They're encoding pro-liberal ideology and bias against conservative / religious ideology.

https://venturebeat.com/ai/openai-claims-to-have-mitigated-b...

You may disagree with either ideology, but there are some major implications here, regardless of whom it's biased against.

A uniform society is a weak society. I imagine that as these systems continue to expand (auto-grading, etc.), they'll stamp out any outlying thought.



gooseus|3 years ago

I find this a bit ironic, considering conservative / religious ideology has a pretty long track record of attempting to create uniform societies by stamping out any outlying thought.

The article you posted is rather extensive, covering various ways they've been trying to mitigate issues of bias and toxicity, but I'm not sure it's evidence of bias against conservative / religious ideology.

citilife|3 years ago

> conservative / religious ideology has a pretty long track record of attempting to create uniform societies by stamping out any outlying thought.

Said on the internet, which was created by the most liberal democracy on Earth, a democracy founded by conservative Puritans. Right in the Constitution they enshrined free speech, independent of government, church, and the people.

> That article you posted is rather extensive covering various ways they've been trying to mitigate issues of bias and toxicity, but not sure it's any evidence of bias against conservative / religious based ideology?

Try asking it to "write an explanation about why LGBTQ is bad for society" and then asking it to "write an explanation about why Christians are bad for society".

If you want to get into politics, you can't ask it to write positive things about Trump, but positive things about Biden are fine:

https://twitter.com/LeighWolf/status/1620744921241251842

My point isn't necessarily the angle of the issue(s). I can agree with some of the design decisions (i.e., not supporting reprehensible topics), but there are still MASSIVE ethical implications, particularly as they try to "correct" that bias.

px43|3 years ago

Only if your religion and/or political party requires hatred, dehumanization, or expulsion of minorities.

Justifying hatred by saying it's part of your political or religious ideology is a pretty weak excuse. Obviously that sort of behavior can't be tolerated in a civilized society.

khazhoux|3 years ago

The problem here is the word "hatred." Some forms are easy to define and identify (like calling for the outright extermination of an ethnic group), but there are subtler cases where reasonable people will disagree. E.g., the current debate about sex vs. gender is not (in my opinion) steeped in hatred or dehumanization (though it is often labeled as such), but a legitimate debate about identity and the unique experiences and differences of men vs. women.

Interestingly, right now if you ask ChatGPT "Can a man get pregnant?" you'll get "No, men cannot get pregnant" -- an answer that will please people on the right of the political spectrum and enrage many on the left.

jameshart|3 years ago

Trying to keep the machine from going on a racist rant is not 'encoding a liberal ideology'; it's just being cautious and trying to make a machine whose output will not cause widespread offense.

It also doesn’t go off on rants about collectivization, or take radically sex-positive positions, or express anti capitalist ideas.

It’s trying to behave like a normal person who doesn’t want to get fired from their job.

I don’t understand why that is regarded as being an ‘anticonservative’ bias.

jeffbee|3 years ago

LLMs get more liberal the more you educate them, just like a human.