top | item 36158541

(no title)

iliane5 | 2 years ago

AFAIK it's pretty standard practice not to expose the "raw" LLM directly to the user. You need a "sanity loop" where user input and the output of the LLM is checked by another LLM to actually enforce rules and mitigate prompt injections, etc.

discuss

order

No comments yet.