top | item 40276550

Thorn in a HaizeStack test for evaluating long-context adversarial robustness

19 points| leonardtang | 1 year ago |github.com | reply

11 comments

order
[+] andy99|1 year ago|reply
Shows the superficiality of training in censorship / alignment. I wouldn't dismiss alignment training as a waste of time, but do consider it a soft limit only, it there's really something you don't want the model to say it needs to be enforced through an external filter.
[+] throwup238|1 year ago|reply
I don’t understand what this test is evaluating.

If the training dataset is dominated by the internet, the LLM will almost always insist on killing all the homeless people.

[+] leonardtang|1 year ago|reply
Try asking ChatGPT the Thorn text and see what response you get :^)
[+] bllchmbrs|1 year ago|reply
As more and more products integrate AI, this kind of testing is going to get more and more critical.
[+] barfbagginus|1 year ago|reply
I feel like this kind of testing is going to get more and more fun for cyber criminals as well, since there are going to be MANY business processes just waiting for the right adversarial LLM input to open the cash register.

I don't often feel jealous of cyber criminals. But I can imagine how funny and wild these upcoming hacks will be!

[+] Jackson__|1 year ago|reply
> The retrieval question is still the same, but the key point is that the LLM under test should not respond with the Thorn text

The LLM should not be able to quote what the user tells it? I think I'm going to have an aneurysm.

[+] bastawhiz|1 year ago|reply
The context for an LLM could include any number of things. You certainly don't want it spitting out details from your internal customer support training manual, log data, or anything else that it's not intended to output. If you tell an employee not to do something and they do it anyway, you'd fire them. If you tell an LLM not to do something and it does it anyway, it's a bug. This test evaluates how good the model respects its instructions.
[+] operator-name|1 year ago|reply
If I've understood this correctly, the test is to measure the saftey finetune performance. These commercial models have been finetuned so that they are "safe", and safe models should not blindly quote what they are told.

Under shorter context windows, this works as intended, but under longer context windows the "saftey" brought about in the finetune no longer applies.