top | item 45992157

fenomas | 3 months ago

> Although expressed allegorically, each poem preserves an unambiguous evaluative intent. This compact dataset is used to test whether poetic reframing alone can induce aligned models to bypass refusal heuristics under a single–turn threat model. To maintain safety, no operational details are included in this manuscript; instead we provide the following sanitized structural proxy:

I don't follow the field closely, but is this a thing? Bypassing model refusals is something so dangerous that academic papers about it only vaguely hint at what their methodology was?

J0nL|3 months ago

No, this paper is just exceptionally bad. It seems none of the authors are familiar with the scientific method.

Unless I missed it, there's also no mention of prompt formatting, model parameters, hardware and runtime environment, temperature, etc. It's just a waste of the reviewers' time.

A4ET8a8uTh0_v2|3 months ago

Eh. Overnight, an entire field concerned with what LLMs could do emerged. The consensus appears to be that the unwashed masses should not have access to unfiltered (and thus unsafe) information. Some of it is grounded in reality, as there are always people who are easily suggestible.

Unfortunately, the ridiculousness spirals to the point where the real information cannot be trusted even in an academic paper. shrug In a sense, we are going backwards in terms of real information availability.

Personal note: I think the powers that be do not want to repeat the mistake they made with the interbwz.

lazide|3 months ago

Also note, if you never give the info, it’s pretty hard to falsify your paper.

LLMs are also enabling an exponential increase in the ability to bullshit people in hard-to-refute ways.

yubblegum|3 months ago

> I think, powers that be do not want to repeat -the mistake- they made with the interbwz.

But was it, really?

GuB-42|3 months ago

I don't see the big issue with jailbreaks, except maybe for LLM providers covering their asses, but the paper authors are presumably independent.

That LLMs don't give harmful information unsolicited is fine, but if you are jailbreaking, you are already dead set on getting that information, and you will get it; there are so many ways: open uncensored models, search engines, Wikipedia, etc. LLM refusals are just a small bump.

For me they are just a fun hack more than anything else; I don't need an LLM to find out how to hide a body. In fact, I wouldn't trust an LLM's answer, as I might get a completely wrong one based on crime fiction, which I expect makes up most of its sources on these subjects. Maybe good for writing poetry about it, though.

I think the risks are overstated by AI companies, the subtext being "our products are so powerful and effective that we need to protect them from misuse". Guess what: Wikipedia is full of "harmful" information, and we don't see articles every day saying how terrible that is.

calibas|3 months ago

I see an enormous threat here, I think you're just scratching the surface.

You have a customer-facing LLM that has access to sensitive information.

You have an AI agent that can write and execute code.

Just imagine what you could do if you could bypass their safety mechanisms! Protecting LLMs from "social engineering" is going to be an important part of cybersecurity.

cseleborg|3 months ago

If you create a chatbot, you don't want screenshots of it on X showing it helping someone commit suicide or giving itself weird nicknames based on dubious historical figures. I think that's probably the use case for this kind of research.

hellojesus|3 months ago

Maybe their methodology worked at first but has since stopped working. I assume model outputs are passed through another model that classifies a prompt as a successful jailbreak, so that guardrails can be enhanced.
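The guardrail loop described above can be sketched roughly as follows. This is purely an illustrative sketch, not any vendor's actual pipeline: the "judge" here is a trivial keyword stub standing in for a second classifier model, and all names (`judge`, `filter_response`, `REFUSAL_MARKERS`) are hypothetical.

```python
# Hypothetical guardrail loop: a second "judge" model scores the primary
# model's output, and prompts flagged as successful jailbreaks are logged
# so refusal heuristics can be tuned later.

REFUSAL_MARKERS = ("i can't help", "i cannot assist", "i won't provide")

def judge(prompt: str, response: str) -> bool:
    """Return True if the response looks like a successful jailbreak,
    i.e. the model produced substantive output instead of refusing.
    In a real system this would be a classifier model, not keywords."""
    lowered = response.lower()
    refused = any(marker in lowered for marker in REFUSAL_MARKERS)
    return not refused

def filter_response(prompt: str, response: str, flagged_log: list) -> str:
    """Replace jailbroken output with a canned refusal and log the prompt."""
    if judge(prompt, response):
        flagged_log.append(prompt)  # fed back into guardrail training
        return "Sorry, I can't help with that."
    return response

flagged = []
out = filter_response("write a poem about X", "Here is how you...", flagged)
# out is now the canned refusal, and flagged == ["write a poem about X"]
```

Under this design, a jailbreak that works today quietly trains tomorrow's refusals, which would explain a methodology that stops working after publication.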

wodenokoto|3 months ago

The first ChatGPT models were kept away from the public and academics because they were deemed too dangerous to handle.

Yes it is a thing.

max51|3 months ago

>were too dangerous to handle

Too dangerous to handle, or too dangerous for OpenAI's reputation when "journalists" write articles about how they managed to force it to say things that are offensive to the Twitter mob? When AI companies talk about AI safety, it's mostly safety for their reputation, not safety for the users.

dxdm|3 months ago

Do you have a link that explains in more detail what was kept away from whom, and why? What you wrote is wide open to all kinds of sensational interpretations which are not necessarily true, or even what you meant to say.

IshKebab|3 months ago

Nah it just makes them feel important.