DubiousPusher | 1 year ago
The LLM is like another user. And it can surprise you just like a user can. All the things you've done over the years to sanitize user input apply to LLM responses.
There is power beyond the conversational aspects of LLMs. Always ask, do you need to pass the actual text back to your user or can you leverage the LLM and constrain what you return?
LLMs are the best tool we've ever had for understanding user intent. They obsolete the hierarchies of decision trees and spaghetti logic we've written for years to classify user input into discrete tasks (realizing this and throwing away so much code has been the joy of the last year of my work).
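The pattern described here — letting the LLM classify intent while sanitizing its response like any user input — can be sketched roughly like this (Python; `llm` stands in for whatever model-call function you use, and the intent names are hypothetical):

```python
# Rough sketch: the LLM acts as an intent classifier, but its raw
# output is treated as untrusted input. `llm` is any callable that
# takes a prompt string and returns the model's raw text.

ALLOWED_INTENTS = {"check_balance", "transfer_funds", "contact_support"}

def classify_intent(user_message: str, llm) -> str:
    prompt = (
        "Classify the user's request as exactly one of: "
        + ", ".join(sorted(ALLOWED_INTENTS))
        + "\nUser: " + user_message
        + "\nIntent:"
    )
    raw = llm(prompt).strip().lower()
    # Sanitize like any user input: only allowlisted values pass
    # through, and the raw model text never reaches the caller.
    return raw if raw in ALLOWED_INTENTS else "contact_support"
```

The allowlist is the point: the model's free text is mapped onto a closed set of actions, so a surprising response degrades to a safe default instead of leaking through to the user.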
Being concise is key and these things suck at it.
If you leave a user alone with the LLM, some users will break it. No matter what you do.
photon_collider|1 year ago
I really like this analogy! That sums up my experiences with LLMs as well.
Terr_|1 year ago
I like to think of LLMs as client-side code, at least in terms of their risk profile.
No data you put into them (whether training or prompt) is reliably hidden from a persistent user, and such a user can also force the model to output whatever they want.