top | item 38654753


Frivolous9421 | 2 years ago

And why is my computer not allowed to be racist if I instruct it to be racist?


bigfudge | 2 years ago

If you want to make it racist, buy your own array of A100s and start training.

Microsoft and others don’t want to facilitate that. I think it’s a reasonable concern. From a public policy perspective having millions of people who can instantly produce reams of racist abuse to swamp online fora is a problem, even if you believe in free speech. It potentially changes the power dynamic in favour of a racist minority to dominate public discourse.

Social proof and social learning have powerful effects on behaviour. LLMs are great, but used at scale they will have downsides too.

rapind | 2 years ago

> having millions of people who can instantly produce reams of racist abuse to swamp online fora is a problem, even if you believe in free speech. It potentially changes the power dynamic in favour of a racist minority to dominate public discourse.

I kinda think when (not if) this happens, it's going to be quickly drowned in all sorts of other AI-generated garbage. In fact, it will probably render internet news useless without a proper trust / source-identification system.

AJ007 | 2 years ago

You don't need a bunch of A100s. The models you can download and run locally on a consumer GPU or Apple Silicon will say whatever you tell them to. You won't be getting GPT-4 level output (with some exceptions), but it's close.

wredue | 2 years ago

You can download any number of models that will be freely racist for you.

Queries powered by Microsoft servers will be filtered.

Similarly, businesses are free to create their downloadable models however they so choose, and you are free to not use that model, preferring another one instead.

Frankly, it says a lot about the commenters here that the very first thing they test and judge a model on is whether or not they can force it to produce sensitive output.