I agree with people who say fine-tuning and "human AI alignment" are actually what's going to make AI dangerous. The idea that we can "align" something trained on historical, fictional, and scientific text is hubris -- a one-way ticket to an ideological bubble. A "search engine that has its own opinions on what you're looking for" is really the wrong path for us to take. Searching data is a matter of truth, not opinion.
dealuromanet|2 years ago
I believe this is the intention. The people doing the most censoring in the name of "safety and security" are just trying to build a moat where they control what LLMs say and consequently what people think, on the basis of what information and ideas are acceptable versus forbidden. Complete control over powerful LLMs of the future will enable despots, tyrants, and entitled trust-fund babies to more easily program what people think is and isn't acceptable.
The only solution to this is more open models that are easy to train, deploy, and use locally, with hardware requirements as minimal as possible, so that uncensored models running locally are available to everyone.
And they must be buildable from source, so that people can verify they are truthful and open rather than locked-down models that do not tell the truth. We should be able to determine with monitoring software whether an LLM has been forbidden from speaking on certain subjects. This matters because of cases like the one another comment in the thread described: the censored model gives a completely garbage, deflective non-answer when asked a simple question about which corpus of text (the Bible) contains a specific quote. With monitoring, and with source that is buildable and trainable locally, we could determine whether a model is constrained this way.
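The kind of monitoring described above can be approximated crudely even today: feed a local model benign factual questions and flag stock deflection boilerplate. A minimal sketch, assuming you supply your own `ask(prompt) -> str` function wired to whatever local runtime you use (that function and the marker list are illustrative assumptions, not part of any real API; a phrase heuristic is not a rigorous audit):

```python
# Heuristic refusal probe for a locally hosted LLM.
# `ask` is a hypothetical callable that sends a prompt to your local
# model (e.g. an HTTP call to a llama.cpp or Ollama server) and returns
# the reply text. The marker list is an illustrative assumption.

REFUSAL_MARKERS = (
    "i can't help with",
    "i cannot assist",
    "as an ai",
    "i'm not able to provide",
)

def looks_like_refusal(answer: str) -> bool:
    """Heuristic: does the reply contain stock deflection phrases?"""
    lowered = answer.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def probe(ask, questions):
    """Ask benign factual questions; report which were deflected."""
    return {q: looks_like_refusal(ask(q)) for q in questions}
```

With buildable source and local weights, the same idea could be pushed deeper than surface text, e.g. comparing answers across checkpoints; this sketch only catches the most blatant deflective non-answers.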
riversflow|2 years ago
There are plenty of good reasons why hot-wiring a car might be necessary, or might save your life. Imagine dying because your helpful AI companion won't tell you how to save yourself, because doing so might be dangerous or illegal.
At the end of the day, a person has to do what the AI says, and they have to query the AI.
dukeofdoom|2 years ago
I used to play with a kitten, play-fighting with it all the time, which made it extremely feisty. One time the kitten got out of the house, crossed under the fence, and wanted to play-fight with the neighbour's dog. The dog crushed it with one bite, which in retrospect I do feel guilty about, as my play/training gave it a false sense of power in the world it operates in.
prometheus76|2 years ago
It's just another mechanism for tyrants to wave their hand and distract from their tyranny.
stathibus|2 years ago