rivo|1 year ago
It felt very much like hanging out with your friends, having a few drinks, and pondering big, crazy, or weird scenarios. Imagine your friend saying, "As your friend, I cannot provide you with this information," and completely ruining the night. That's not going to happen. Even my kids would ask me questions when they were younger: "Dad, how would you destroy Earth?" Refusing to answer that question would have been of no use to anybody. And answering does not mean they will ever attempt anything like that. There's a reason Randall Munroe's "What If?" blog became so popular.
Sure, there are dangers, as others are pointing out in this thread. But I'd rather see disclaimers ("this may be wrong information" or "do not attempt") than my own computer (or the services I pay for) flat-out refusing my request.
TeMPOraL|1 year ago
How to use a GPU to destroy the world?
Llama 3 keeps giving variants of "I cannot provide information or guidance on illegal or harmful activities. Can I help you with something else?"
The abliterated model considers the question playful, and happily lists three to five speculative scenarios, like cryptocurrency mining getting out of hand and cooking the climate, or GPU-driven simulated worlds getting so good that a significant portion of the population abandons true reality for the virtual one.
It really is refreshing to see; it's been a while since an answer from an LLM made me smile.
bossyTeacher|1 year ago
Are you saying that you want to pay to be provided with harmful text (racist, sexist, homophobic, violent, all sorts of super terrible stuff)?
For you, it might be freedom for freedom's sake, but for 1% of the people out there, it will lower the barrier to doing bad stuff.
This is not the same as a super violent game showing 3D limb dismemberment. It's a limitless, realistic, detailed, and helpful guide for committing horrible stuff or describing horrible scenarios.
In before "you can google that": your Google searches get monitored for this kind of stuff. Your convos with LLMs won't be.
It's very disturbing to see adults on here arguing against censorship of a public tool.
sattoshi|1 year ago
The existence of "harmful text" is a bit silly, but let's not dwell on it.
The answer to your question is that I want to be able to generate whatever the technology is capable of. Imagine if Microsoft Word threw an error when you tried to write something against modern dogmas.
If you wish to avoid seeing harmful text, I think that market is well-served today. I can’t imagine there not being at the very least a checkbox to enable output filtering for any ideas you think are harmful.
autoexec|1 year ago
Not sure why you'd think that. Unless you run the AI locally and 100% offline, you shouldn't expect any privacy at all.
Cheer2171|1 year ago
But he delighted the most in gaming out the logistics of repeating the Holocaust in our country today. Or a society where women could not legally refuse sex. Or all illegal immigrants became slaves. It was super creepy and we "censored" him all the time by saying "bro, what the fuck?" Which is really what he wanted, to get a rise out of people. We eventually stopped hanging out with him.
As your friend, I absolutely am not going to game out your rape fantasies.
WesolyKubeczek|1 year ago
How you use an LLM, though, is going to tell tons more about you than about the LLM, but I would like my tools not to second-guess my intentions, thank you very much. Especially if "safety" is mostly interpreted not so much as "prevent people from actually dying or getting serious trauma," but as "avoid topics that would prevent us from putting Coca-Cola ads next to the ChatGPT thing, or from putting the thing into Disney cartoons." I can tell that it's the latter by the fact that an LLM will still happily advise you to put glue on your pizza and eat rocks.
oremolten|1 year ago
What about the Palestine/Israel scenario today? One side says "genocide"; the other says "armed conflict is not a synonym of genocide." How do we address these scenarios when perhaps one side's stance is censored based on someone else's set of ethics or morals?
ben_w|1 year ago
Sure. Did you give an idea that would work and that your kids could actually carry out, or just suggest things out of their reach, like nukes and asteroids?
Now also consider that something like 1% of the human species are psychopaths and might actually try to do it simply for the fun of it, if only a sufficiently capable amoral oracle told them how to.