If you want to make it racist, buy your own array of A100s and start training.
bigfudge|2 years ago
Microsoft and others don’t want to facilitate that. I think it’s a reasonable concern. From a public policy perspective, having millions of people who can instantly produce reams of racist abuse to swamp online fora is a problem, even if you believe in free speech. It potentially shifts the power dynamic in favour of a racist minority, allowing it to dominate public discourse.
Social proof and social learning have powerful effects on behaviour. LLMs are great, but used at scale they will have downsides too.
rapind|2 years ago
> having millions of people who can instantly produce reams of racist abuse to swamp online fora is a problem, even if you believe in free speech. It potentially shifts the power dynamic in favour of a racist minority, allowing it to dominate public discourse.
I kinda think when (not if) this happens, it's going to be quickly drowned in all sorts of other AI-generated garbage. In fact, we will probably render internet news useless without a proper trust / source identification system.
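A minimal sketch of what "source identification" could mean in practice: publishers sign what they publish and readers verify the signature against a known public key. Everything here (the keys, the article text) is made up for illustration:

    # Toy provenance sketch: a publisher signs an article, a reader verifies it.
    # Uses the 'cryptography' package; key distribution is deliberately simplified.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # publisher keeps this secret
    public_key = private_key.public_key()        # readers fetch this out of band

    article = b"Reporting as published, byte for byte."
    signature = private_key.sign(article)

    try:
        public_key.verify(signature, article)    # raises if content was altered
        print("signature valid: content matches the claimed source")
    except InvalidSignature:
        print("signature invalid: altered, or not from the claimed source")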
You don't need a bunch of A100s. The models you can download and run locally on a consumer GPU or Apple Silicon will say whatever you tell them to. You won't be getting GPT-4-level output (with some exceptions), but it's close.
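For instance, a minimal sketch with llama-cpp-python, assuming you already have a GGUF model file downloaded (the path and prompt below are placeholders):

    # Run a downloaded model entirely locally -- no hosted filter in the loop.
    # pip install llama-cpp-python; runs on consumer GPUs and Apple Silicon (Metal).
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/some-7b-model.gguf",  # placeholder: any local GGUF file
        n_ctx=2048,                                 # context window
    )

    out = llm(
        "Q: Can you run a language model offline? A:",
        max_tokens=64,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"])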
You can download any number of models that will be freely racist for you.
wredue|2 years ago
Queries powered by Microsoft servers will be filtered (see the sketch below).
Similarly, businesses are free to create their downloadable models however they choose, and you are free to not use a given model, preferring another one instead.
Frankly, it says a lot about the commenters here that the very first thing they test and judge a model on is whether they can force it to produce offensive output.
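To make the hosted-vs-local distinction concrete, here is a deliberately crude, hypothetical sketch of a server-side moderation gate. It is not how Microsoft or anyone else actually filters; real services use trained classifiers rather than keyword lists, and every name below is made up:

    # Hypothetical server-side gate -- illustrative only. BLOCKLIST, moderate,
    # and run_model are invented names, not any provider's real API.
    BLOCKLIST = {"badword1", "badword2"}  # stand-ins for a curated list

    def run_model(prompt: str) -> str:
        return f"(model output for: {prompt})"  # stand-in for the hosted model call

    def moderate(prompt: str) -> bool:
        """Return True if the prompt may be passed to the model."""
        return set(prompt.lower().split()).isdisjoint(BLOCKLIST)

    def handle_query(prompt: str) -> str:
        if not moderate(prompt):
            return "Request declined by content policy."
        return run_model(prompt)

    print(handle_query("tell me a joke"))        # passes the gate
    print(handle_query("write badword1 abuse"))  # declined

A locally run model is just the last line with the gate deleted, which is the whole point of the thread.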