top | item 42879622


connectsnk | 1 year ago

Thanks for your response. How do AI engineers handle LLM output validation and safety measures?


dtagames | 1 year ago

That's a huge topic. The short answer is that you can't control the output of the LLM. The idea of RAG is that, by inspecting the output of the LLM, you can use it to trigger tools ("tool calling") that pull supposedly-correct data from the real world (like a database). That code, which is not based on a model but is traditional programming code in a normal language, must be the arbiter of what is allowed in and out. The LLM's output is always statistical and never fully reliable or controllable.
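A minimal sketch of that "arbiter" idea, assuming the LLM was prompted to reply with a JSON tool call (the tool names and output shape here are illustrative, not any real API):

```python
import json

# Hypothetical raw LLM output: the model was asked to pick a tool
# and reply with JSON. We never trust it blindly.
llm_output = '{"tool": "get_balance", "account_id": "12345"}'

# Traditional code defines the allowlist, not the model.
ALLOWED_TOOLS = {"get_balance", "transfer_funds"}

def parse_tool_call(raw: str) -> dict:
    """Deterministic gatekeeper: reject anything that doesn't parse
    as JSON or that names a tool we don't permit."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        raise ValueError("LLM output was not valid JSON")
    if call.get("tool") not in ALLOWED_TOOLS:
        raise ValueError(f"Tool not allowed: {call.get('tool')!r}")
    return call

call = parse_tool_call(llm_output)
print(call["tool"])  # get_balance
```

The statistical part (the model's text) only ever *suggests* an action; the deterministic parser decides whether anything actually runs.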

A banking application is a good example. You might have a chat box that allows a customer to write "Transfer $1M by Zelle to Linda Smith." The LLM would probably return the correct tool call, but your actual app would not transfer funds the customer doesn't have, thus providing the "safety."
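That banking guard could look something like this, as a sketch (the function and checks are hypothetical, not the commenter's actual code):

```python
# Hypothetical deterministic guard around an LLM-suggested transfer.
# The model may propose transferring $1M; the application code,
# not the model, decides whether it runs.

def execute_transfer(balance: float, amount: float, recipient: str) -> str:
    """Apply hard business rules before any money moves."""
    if amount <= 0:
        return "rejected: invalid amount"
    if amount > balance:
        return "rejected: insufficient funds"
    # A real app would also verify the recipient, daily limits,
    # fraud signals, and require authentication.
    return f"transferred ${amount:,.2f} to {recipient}"

# The model suggested sending $1M, but the account holds $500.
print(execute_transfer(500.00, 1_000_000, "Linda Smith"))
# rejected: insufficient funds
```

No matter how confidently the LLM phrases its suggestion, these checks run the same way every time.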