top | item 39535256

hemogloben | 2 years ago

> So if LLMs did the same (i.e. purposefully distorted facts and historical events due to arbitrary and political reasons) it would also be acceptable?

You're talking as if there is a way to get an 'unbiased' AI. There isn't. It is inherently biased by its training, it hallucinates, and it is further biased by its prompt.

The whole endeavor is to bias it.

I'd prefer that AI be labelled on the tin for what its biases are attempting to do, and "promote diversity and deter abuse" seems like a perfectly reasonable goal to aim for.

If that's not good enough for you, fine, but you can't pretend that you're utterly baffled as to why they would make that choice over any other.

There literally ISN'T a way to not have a biasing prompt.

ffgjgf1 | 2 years ago

> You're talking as if there is a way to get an 'unbiased' AI. There isn't. It is inherently biased by its training, it hallucinates, and it is further biased by its prompt.

Certainly. That doesn’t mean we shouldn’t still prioritize accuracy and integrity instead of purposefully increasing the amount of bias even further.

> you're utterly baffled why they would make that choice over any other.

I’m not. I’m baffled that there are people defending that choice (especially in such a way).

> and promote diversity

Why? I mean, why do you think this is the right way to do it? Surely going out of your way to make sure your model doctors the images it creates to conform to some political agenda (whatever that might be) would achieve the opposite, because it legitimizes the things the other side is constantly saying (and risks severe backlash from the more moderate fraction of society).