top | item 46901335

(no title)

thethimble | 24 days ago

This will absolutely help but to the extent that prompt injection remains an unsolved problem, an LLM can never conclusively determine whether a given skill is truly safe.

discuss

order

No comments yet.