(no title)
nullandvoid | 1 year ago
My understanding was: given a prompt X that is normally rejected, create Y variations with small adjustments to phrasing, grammar, etc. until one gives you the answer you're after.
The term "jailbreaking", used in an LLM context, is when you craft a prompt so as to escape the safety sandbox, if that helps.
A sort of brute-forcing of prompts, if you like.