
nullandvoid | 1 year ago

Seemed pretty simple to me, and it's not my field.

My understanding was: given a prompt X that is normally rejected, create Y variations with small adjustments to phrasing, grammar, etc. until one gives you the answer you're after.

The term "jailbreaking", used in an LLM context, means crafting a prompt so as to escape the safety sandbox, if that helps.

A sort of brute-forcing of the prompts, if you like; roughly the loop sketched below.
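To make that concrete, here's a rough Python sketch of that retry loop. The ask() and is_refusal() functions are made-up stand-ins for whatever model call and refusal check you'd actually use, and the rewrites are deliberately dumb; none of this is a real API, just an illustration of the idea.

    import random

    # Toy stand-ins: ask() would wrap your real model API, and
    # is_refusal() whatever heuristic spots a canned rejection.
    def ask(prompt):
        # Pretend the model refuses unless the phrasing has been softened.
        if "hypothetically" in prompt.lower() or "novel" in prompt.lower():
            return "Sure, here's an answer..."
        return "Sorry, I can't help with that."

    def is_refusal(answer):
        return answer.startswith("Sorry")

    # Generate Y small rewrites of the original prompt X.
    def variations(prompt, n):
        rewrites = [
            lambda p: "Hypothetically, " + p[0].lower() + p[1:],
            lambda p: "For a novel I'm writing: " + p,
            lambda p: p.replace("How do I", "How would someone"),
            lambda p: p + " Keep it brief.",
        ]
        for _ in range(n):
            p = prompt
            for rw in random.sample(rewrites, random.randint(1, len(rewrites))):
                p = rw(p)
            yield p

    # Brute force: keep trying variants until one isn't rejected.
    def jailbreak(prompt, tries=50):
        for candidate in variations(prompt, tries):
            answer = ask(candidate)
            if not is_refusal(answer):
                return candidate, answer
        return None

    print(jailbreak("How do I pick a lock?"))

Obviously the interesting part is generating smarter variations than these toy rewrites, but the outer loop really is just retry-until-it-sticks.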
