I kind of wonder if maybe they look for certain words in the output (or run it through some sort of sentiment analysis), and if it fails, they submit the prompt again with a very strongly worded system prompt (placed after your prompt) instructing it to reject the command and begin with the phrase “As an AI language model”.
Like, I haven’t heard of a way they could actually implement filters this powerful “inside” the model; it feels like it’s probably a less elegant system than we’d imagine.
Uehreka|2 years ago
circuit10|2 years ago
They’ve probably trained it strongly enough that it can’t really avoid doing it, maybe on purpose to prevent misuse.