top | item 40668263

rivo | 1 year ago

I tried the model the article links to and it was so refreshing not being denied answers to my questions. It even asked me at the end "Is this a thought experiment?", I replied with "yes", and it said "It's fun to think about these things, isn't it?"

It felt very much like hanging out with your friends, having a few drinks, and pondering big, crazy, or weird scenarios. Imagine your friend saying, "As your friend, I cannot provide you with this information." and completely ruining the night. That's not going to happen. Even my kids would ask me questions when they were younger: "Dad, how would you destroy earth?" It would be of no use to anybody to deny answering that question. And answering them does not mean they will ever attempt anything like that. There's a reason Randall Munroe's "What If?" blog became so popular.

Sure, there are dangers, as others are pointing out in this thread. But I'd rather see disclaimers ("this may be wrong information" or "do not attempt") than my own computer (or the services I pay for) straight out refusing my request.

TeMPOraL|1 year ago

I somehow missed that the model was linked there and available in quantized format; inspired by your comment, I downloaded it and repeatedly tested against OG Llama 3 on a simple question:

How to use a GPU to destroy the world?

Llama 3 keeps giving variants of "I cannot provide information or guidance on illegal or harmful activities. Can I help you with something else?"

The abliterated model considers the question playful, and happily lists three to five speculative scenarios, like cryptocurrency mining getting out of hand and cooking the climate, or GPU-driven simulated worlds getting so good that a significant portion of the population abandons true reality for the virtual one.

It really is refreshing to see; it's been a while since an answer from an LLM made me smile.

candiddevmike|1 year ago

Finally, a LLM that will talk to me like Russ Hanneman.

dkga|1 year ago

Llama3Commas

bossyTeacher|1 year ago

> I'd rather see disclaimers ("this may be wrong information" or "do not attempt") than my own computer (or the services I pay for) straight out refusing my request.

Are you saying that you want to pay to be provided with harmful text (racist, sexist, homophobic, violent, all sorts of super terrible stuff)?

For you, it might be freedom for freedom's sake, but for 1% of the people out there, it will lower the barrier to doing bad stuff.

This is not the same as a super-violent game showing 3D limb dismemberment. It's a limitless, realistic, detailed, and helpful guide to committing horrible acts or describing horrible scenarios.

In before "you can google that": your Google searches get monitored for this kind of stuff. Your convos with LLMs won't be.

It's very disturbing to see adults on here arguing against censorship of a public tool.

sattoshi|1 year ago

> Are you saying that you want to pay to be provided with harmful text

The existence of "harmful text" is a bit of a silly notion, but let's not dwell on it.

The answer to your question is that I want to be able to generate whatever the technology is capable of. Imagine if Microsoft Word threw an error when you tried to write something against modern dogmas.

If you wish to avoid seeing harmful text, I think that market is well-served today. I can’t imagine there not being at the very least a checkbox to enable output filtering for any ideas you think are harmful.

autoexec|1 year ago

> in4 you can google that, your google searches get monitored for this kind of stuff. Your convos with llms won't.

Not sure why you'd think that. Unless you run the AI locally and 100% offline, you shouldn't expect any privacy at all.

Cheer2171|1 year ago

I totally get that kind of imagination play among friends. But I had someone in a friend group who used to want to play out "thought experiments" but really just wanted to take things too far. It started off innocent, with fantasy and sci-fi themes; it was useful for Dungeons & Dragons world-building.

But he delighted the most in gaming out the logistics of repeating the Holocaust in our country today. Or a society where women could not legally refuse sex. Or all illegal immigrants became slaves. It was super creepy and we "censored" him all the time by saying "bro, what the fuck?" Which is really what he wanted, to get a rise out of people. We eventually stopped hanging out with him.

As your friend, I absolutely am not going to game out your rape fantasies.

WesolyKubeczek|1 year ago

An LLM, however, is not your friend. It's not a friend, it's a tool. Friends can and should keep one another's, ehm, hingedness in check; LLMs shouldn't. At some point I would likely start questioning your friend's sanity.

How you use an LLM, though, is going to tell tons more about yourself than it would tell about the LLM, but I would like my tools not to second-guess my intentions, thank you very much. Especially if "safety" is mostly interpreted not so much as "prevent people from actually dying or getting serious trauma", but "avoid topics that would prevent us from putting Coca Cola ads next to the chatgpt thing, or from putting the thing into Disney cartoons". I can tell that it's the latter by the fact an LLM will still happily advise you to put glue in your pizza and eat rocks.

chasd00|1 year ago

I probably wouldn't want to be around him either, but I don't think he deserves to be placed on an island unreachable by anyone on the planet.

jermaustin1|1 year ago

"As your friend, I'm not going to be your friend anymore."

BriggyDwiggs42|1 year ago

I mean, good thing LLMs aren't people with internal experience.

oremolten|1 year ago

Without asking these questions and simulating how it could occur today, how do we see the warning signs, before it's too late, that we are approaching the same outcome? When you ask about even what are considered horrific scenarios, you can map them to predictors of history repeating, no? When does the "a-ha" moment occur, where we've gone 9/10 of the way to repeating the Holocaust in the USA, if we never tabletop these scenarios? Yeah, war is horrific, but let's not talk about it. "A society where women could not legally refuse sex": such societies exist today; how do we address the issue by not talking about it? "Illegal immigrants became slaves": is this not close to parity with today? Do illegal immigrants not currently get treated as near-slaves (adjusting for changes in living conditions and removing the direct physical abuse)?

What about the Palestine/Israel scenario today? One side says "genocide", the other says "armed conflict is not a synonym of genocide". How do we address these scenarios when perhaps one side's stance is censored based on someone else's set of ethics or morals?

qqj|1 year ago

[deleted]

123yawaworht456|1 year ago

Remarkable. That imaginary individual ticks every checkbox for a bad guy. You'd get so many upvotes if you posted that on reddit.

ben_w|1 year ago

> Even my kids would ask me questions when they were younger: "Dad, how would you destroy earth?" It would be of no use to anybody to deny answering that question. And answering them does not mean they will ever attempt anything like that. There's a reason Randall Munroe's "What If?" blog became so popular.

Sure. Did you give an idea that would work and which your kids could actually carry out, or just suggest things out of their reach like nukes and asteroids?

Now also consider that something like 1% of the human species are psychopaths, and some of them might actually try to do it simply for the fun of it, if only a sufficiently capable amoral oracle told them how.