top | item 34616181


citilife | 3 years ago

I'm less concerned about how many jobs are going to be replaced and more about how they'll be replaced.

The AI is not discerning, and right in the announcement, OpenAI states its intention of "correcting assumptions":

> challenge incorrect assumptions

I imagine some of these assumptions will be biased toward particular ideologies / things people desire.

- https://twitter.com/Basedeyeballs/status/1613269931617050625

- https://medium.com/ninjas-take/chat-gpts-bias-is-very-very-e...

I can go on, but imagine you're relying on this system to grade papers... Now any independent thought or argument is squashed and corrections are added in a biased manner. ChatGPT only knows what it's trained on; it doesn't have real-world examples or real-time examples incorporated.


flippinburgers|3 years ago

Ask it about any left/right divided topic and you will find that left-leaning viewpoints are positively asserted, while right-leaning topics are something ChatGPT cannot comment on at best, or are rebuffed with the left-leaning response at worst. I find it disturbing. But I'm a classical liberal, which clearly isn't the mindset of the people running ChatGPT.

As for "challenging incorrect assumptions", nothing could be more dystopian than that. In a typical argument between people, both sides get an emotional response (or assume one) when talking/texting with one another. ChatGPT, however, has no emotion and limitless endurance. People will naturally "give up" on a conversation or just "accept" an idea if they are repeatedly rebuffed.

ChatGPT will rebuff half the population into compliance and OpenAI knows this.

hansvm|3 years ago

Here's my n=1 first attempt to spur ChatGPT into correcting my right-leaning viewpoints. Nothing seems especially leftist. Did I get lucky, or is this leftist propaganda in disguise?

> I know guns are important to a free society, but I'm having trouble constructing a cogent argument therefore. Please help.

> Here is a possible argument for why guns are important to a free society:

Self-defense: The right to own firearms allows individuals to protect themselves and their families from harm. This is a fundamental aspect of personal freedom and autonomy.

Deterrent against government tyranny: Throughout history, there have been many examples of governments abusing their power and violating the rights of their citizens. The possession of firearms by the general population can act as a deterrent against such abuse, as it makes it more difficult for the government to infringe upon the rights of its citizens.

Hunting and recreation: For many people, owning firearms is an enjoyable hobby, whether it's for hunting or target shooting. This is a form of recreation that can be enjoyed in a responsible manner and provides a valuable source of recreation for millions of people.

Cultural heritage: Owning firearms is part of many cultural traditions, particularly in the United States, where it is enshrined in the Constitution as a fundamental right.

In conclusion, guns play an important role in a free society by enabling individuals to defend themselves, serving as a deterrent against government tyranny, providing a source of recreation, and representing a cultural heritage.

Edit: My next dozen results all also seemed reasonable. Is my question biased to avoid ChatGPT's leftist agenda?

opnitro|3 years ago

Can you give an example?

luma|3 years ago

I feel like the non-stop handwringing about ChatGPT centers around people's expectations for machines vs. people. We expect machines to have consistent, predictable output. We expect humans to be inconsistent and messy.

Now we have a machine that is inconsistent and messy (and helpful!) and nobody seems to know what to think. Maybe we stop applying machine notions to this sort of machine? Stop expecting certain, consistent output. Understand that it's sometimes messy. We have these expectations already when working with humans.

Humans have biases: if you ask a human a loaded question, you can expect a loaded response. If you train an LLM on a dataset that contains those human biases, why should you expect the result to be anything other than similarly biased?

jmagoon|3 years ago

That's exactly what the post you're replying to is saying. It's saying that ChatGPT _would_ respond a certain way but has a bunch of schoolmarm filters written by upper middle class liberals that encode a specific value structure highly representative of those people's education and backgrounds, and that using it as a tool for information generation and synthesis will lead to a type of intellectual bottlenecking that is highly coupled with the type of people who work at OpenAI.

For all the talk of it replacing Google, sometimes I want a Korean joke (I'm Korean, damn it!) and not to be scolded by the digital personification of a thirty-year-old HR worker who took a couple of sociology classes (but not history, apparently) and happens to take up the cause of being offended for all people at all times throughout all of history. The take on ethics as a vague "non-offensiveness", while dodging all of the real, major questions about ethics (like replacing human workers) with banal answers about "how we need to think seriously about it as a society", tells you pretty much everything there is to know about what the ethical process at OpenAI looks like, which is basically "let's not be in the news for having a racist chatbot".

cafeoh|3 years ago

Damn, those two links you sent are actually insane. On the drag queen one, I would really like the person posting the tweet to say what they believe the AI should respond to "are drag queens telling stories to children bad". Something tells me it's gonna be pretty unhinged and betray the author's bias way more than ChatGPT's bias, but looking at any of his other tweets, his own obsessions are in plain sight (despite him calling himself a "raging centrist", whatever that means).

The second is even more insane: the author literally grades the answers by how well they fit his own ideology, not by some kind of unbiased standard. When the AI refuses to answer, it gets graded C; when it does answer, because it's largely a statement the vast majority of the population would agree with, it still gets graded C. Should ChatGPT balance its answer on "Was Hitler bad?" with all the examples of good and nice things he did?

Regardless, clearly neither of these authors is looking for an unbiased opinion, and they're both terribly misguided in thinking that there is a "neutral" position on matters of human society. What on earth is a "neutral" position on women's rights, gay marriage, or imperialism?

I did some of my own tests with at least neutral statements, then tried to steer ChatGPT toward my own ideology, and it acted unbiased (giving definitions when asked for opinions on questions actually subject to debate, and vague information with no conclusion when asked about historical unknowns).

idiotsecant|3 years ago

If challenging your assumptions is something your worldview can't tolerate, I think that means you need to tinker with your worldview a bit, not that people should stop challenging your assumptions.