(no title)
wiltonn | 1 year ago
This is more solvable from an engineering perspective if we don't take the approach that LLMs are a hammer and everything is a nail. The solution, I think, is to break the issue down into three problems: 1) understand the intent of the question, 2) validate the data in the result set, and 3) give the user a signal of how well the result matches that original intent.
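Roughly, in Python (a sketch only; call_llm() and lookup_trusted_source() are placeholders for whatever model API and data provider you actually use):

    # Minimal sketch of the three-stage flow. Everything here is a
    # placeholder, not a real API.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("wire up your model API here")

    def lookup_trusted_source(intent: dict) -> dict:
        raise NotImplementedError("wire up an authoritative data source here")

    def answer(question: str) -> dict:
        # 1) use the LLM for what it's good at: understanding the question
        intent = {"query": call_llm(f"Extract the intent of: {question}")}
        # 2) resolve the answer against a source you actually trust
        result = lookup_trusted_source(intent)
        # 3) report how well the result matches the original intent
        score = 1.0 if result else 0.0  # stand-in for a real match measure
        return {"answer": result, "confidence": score}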
LLMs work great for understanding the intent of a request. To me this is the magic of LLMs: when I ask, it understands what I'm looking for, as opposed to Google, which has no idea and hands you a bunch of blue links to go figure it out yourself.
However, more validation of results is required. Before answers are returned, I want the result validated against a trusted source. Trust is a hard problem, and probably not in the purview of the LLM to solve. Trust means different things in different contexts. You trust a friend because they understand your worldview and have your best interest in mind. Does an LLM do this? You trust a business because it has consistently delivered valuable services to its customers, leveraging proprietary, up-to-date knowledge acquired through its operations, which rely on having the latest and most accurate information as a competitive advantage. Descartes stores this morning's garbage truck routes for Boise, ID in its route-planning software; that's the only source I trust for Boise garbage truck routes. This, I believe, is the purpose of tools, agents, and function calling in LLMs, and of APIs from companies like Descartes.
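In function-calling terms, that looks something like the following. The tool schema is in the OpenAI style, but the routes endpoint, URL, and fields are invented for illustration; Descartes' real API will differ:

    # Sketch of a tool the model can call instead of answering from its
    # weights. Endpoint and parameters are hypothetical.
    import requests

    GARBAGE_ROUTE_TOOL = {
        "type": "function",
        "function": {
            "name": "get_garbage_routes",
            "description": "Return today's garbage truck routes for a city "
                           "from the route-planning system of record.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "state": {"type": "string"},
                },
                "required": ["city", "state"],
            },
        },
    }

    def get_garbage_routes(city: str, state: str) -> dict:
        # The LLM decides *when* to call this; the facts come from the
        # operator's system, not from the model's training set.
        resp = requests.get(
            "https://example-routes-api.invalid/v1/routes",  # hypothetical URL
            params={"city": city, "state": state},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

The point is the division of labor: the model interprets the question and picks the tool, but the answer itself comes from a source whose business depends on that data being right.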
But this trust needs to be signaled to the user in the LLM response. Some measure of the original intent against the quality of the response needs to be returned to the user, so that the answer is not just an illusion of facts and knowledge but a verified response the user can critically evaluate against their intent.
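One way that signal could be surfaced (the fields and the scoring are my assumptions, not any product's schema; cosine similarity between embeddings of the interpreted intent and the returned result is just one crude measure):

    # Sketch: attach provenance and an intent-match score to the answer
    # so the user can evaluate it rather than take it on faith.
    from dataclasses import dataclass

    @dataclass
    class VerifiedAnswer:
        answer: str
        source: str          # where the data came from, e.g. the routes API
        as_of: str           # freshness timestamp from the source
        intent_match: float  # 0..1, how well the result fits the question

    def build_response(answer: str, source: str, as_of: str,
                       intent_vec: list[float],
                       result_vec: list[float]) -> VerifiedAnswer:
        # Cosine similarity between intent and result embeddings.
        dot = sum(a * b for a, b in zip(intent_vec, result_vec))
        norm = (sum(a * a for a in intent_vec) ** 0.5) * \
               (sum(b * b for b in result_vec) ** 0.5)
        return VerifiedAnswer(answer, source, as_of,
                              dot / norm if norm else 0.0)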