
They've joined the Linux Foundation. Does that mean the models are eventually going to be censored to satisfy the foundation's AI safety policies, including ensuring the models don't generate content that's non-inclusive or against diversity policies?


pico_creator|2 years ago

Currently the main policy is only around copyright - and nothing about AI safety: https://www.linuxfoundation.org/legal/generative-ai

Also, with the full power of open source, if LF really forces something the group disagrees with, we will just fork.

All the other alignment policies are optional for groups to opt in to.

So I would not worry so much about that - the group already has a plan in the event we need to leave the Linux Foundation, for example if the USA regulates AI training (since LF is registered in the USA).

viraptor|2 years ago

Downvoted, because it's a very trolly way to ask this. Especially given the foundation doesn't have an AI safety policy from what I've seen. Let's be better than this...

lhl|2 years ago

It is trivial to fine-tune any model (whether a base model or an aligned model) to your preferred output preferences, as long as you have access to the model weights.

Al-Khwarizmi|2 years ago

Not trivial for the general public at all. Furthermore, you need much more memory for fine-tuning than for inference, which often makes it infeasible for many machine/model combinations.
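A rough back-of-the-envelope sketch of that memory gap, under some common but simplified assumptions (fp16 weights for inference; fp32 weights, fp32 gradients, and two fp32 Adam moment buffers for full fine-tuning; activation memory ignored, which only widens the gap):

```python
# Illustrative per-parameter byte counts (assumptions, not exact figures
# for any particular framework):
#   inference:      fp16 weights only                  -> 2 bytes/param
#   full fine-tune: fp32 weights (4) + fp32 grads (4)
#                   + Adam moments m and v (4 + 4)     -> 16 bytes/param
GIB = 1024 ** 3

def inference_gib(n_params: int) -> float:
    """Approximate GiB to hold fp16 weights for inference."""
    return n_params * 2 / GIB

def full_finetune_gib(n_params: int) -> float:
    """Approximate GiB for weights, gradients, and Adam optimizer state."""
    return n_params * (4 + 4 + 4 + 4) / GIB

if __name__ == "__main__":
    n = 7_000_000_000  # a hypothetical 7B-parameter model
    print(f"inference:  ~{inference_gib(n):.1f} GiB")      # ~13.0 GiB
    print(f"fine-tune:  ~{full_finetune_gib(n):.1f} GiB")  # ~104.3 GiB
```

So a model that fits comfortably on a single consumer GPU for inference can need roughly 8x the memory for naive full fine-tuning; techniques like LoRA or quantized fine-tuning exist precisely to shrink that gap.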