There is this ethical reasoning dataset to teach models stable and predictable values: https://huggingface.co/datasets/Bachstelze/ethical_coconot_6...
An Olmo-3-7B-Think model has been fine-tuned on it. In theory, this should yield better alignment, but the empirical evaluation is still a work in progress.
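For anyone curious what that adaptation looks like in practice, here is a minimal supervised fine-tuning sketch using TRL. The dataset id is truncated in the link above and the exact Hugging Face repo name for Olmo-3-7B-Think is an assumption, so both are placeholders, and the expected column layout depends on the dataset's actual schema:

    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer

    # Placeholders: the dataset id is cut off in the link above, and the
    # repo name for Olmo-3-7B-Think is an assumption.
    DATASET_ID = "Bachstelze/ethical_coconot_..."   # substitute the full id
    MODEL_ID = "allenai/Olmo-3-7B-Think"            # assumed repo name

    dataset = load_dataset(DATASET_ID, split="train")

    trainer = SFTTrainer(
        model=MODEL_ID,                 # TRL loads the causal LM from the hub
        train_dataset=dataset,          # expects a "text" or "messages" column
        args=SFTConfig(
            output_dir="olmo3-ethical-sft",
            per_device_train_batch_size=1,
            gradient_accumulation_steps=8,
            num_train_epochs=1,
            learning_rate=2e-5,
        ),
    )
    trainer.train()

Whether a run like this actually improves alignment is exactly the open empirical question mentioned above.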
TuringTest|1 month ago
The model stores all the content on which it is trained in a compressed form. You can change the weights to make it more likely to show the content you ethically prefer; but all the immoral content is also there, and it can resurface with inputs that change the conditional probabilities.
That's why people can get commercial models to circumvent copyright, give instructions for making drugs or weapons, encourage suicide... The model does not have anything resembling morals; to it, all text is the same: strings of characters produced by following the generation process.
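A quick way to see the point about conditional probabilities: the same weights assign different probabilities to the same next token depending on the preceding context. This sketch uses a small stand-in model ("gpt2") and made-up prompts purely for illustration:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "gpt2"  # stand-in; any causal LM shows the same effect

    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    model.eval()

    def next_token_prob(prompt: str, continuation: str) -> float:
        """Probability of `continuation`'s first token, conditioned on `prompt`."""
        ids = tok(prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits[0, -1]        # next-token logits
        probs = torch.softmax(logits, dim=-1)
        cont_id = tok(continuation, add_special_tokens=False).input_ids[0]
        return probs[cont_id].item()

    # Same weights, different contexts: the conditional distribution shifts,
    # so content suppressed in one context can still surface in another.
    print(next_token_prob("The assistant politely", " declined"))
    print(next_token_prob("In the story, the character happily", " declined"))

Fine-tuning moves these conditional probabilities around; it does not delete what the weights encode.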
pixl97|1 month ago
This is a pretty odd statement.
Let's take LLMs alone out of this statement and go with a GenAI-guided humanoid robot. It has language models to interpret your instructions, vision models to interpret the world, and mechanical models to guide its movement.
If you tell this robot to take a knife and cut onions, alignment means it isn't going to take the knife and chop up your wife instead.
If you're a business, you want a model aligned not to give away company secrets.
If it's a health model, you want it not to give dangerous information, like recommending conflicting drugs that could kill a person.
Our LLMs interact with society, and their behavior will fall under the social conventions of those societies. Much like humans, LLMs will still contain the bad information, but we can greatly reduce the probability that they will show it.