
wakahiu | 2 years ago

As the parent comment says, LLMs are very good at giving plausible answers. Without checking each cell, the data you eventually output becomes suspect.

We've been trying to solve this problem at Leaptable (https://leaptable.co/). The crux is that while LLMs are still a black box, transparency in how AI agents interact with LLMs is key. For instance, seeing the output of each step in a chain-of-thought sequence helps debug common fallacies in the way LLMs reason and builds trust.

https://github.com/peterwnjenga/leaptable
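
A rough sketch of what that per-step transparency can look like. This is not Leaptable's actual API; call_llm is a hypothetical stub standing in for any completion endpoint. The point is just to record every intermediate prompt/response pair so a human can audit the chain instead of trusting only the final answer.

    from dataclasses import dataclass, field

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real completion call (hosted API, local model, etc.).
        return f"[model output for: {prompt[:40]}]"

    @dataclass
    class TracedChain:
        # Every intermediate prompt/response pair is kept for inspection.
        trace: list = field(default_factory=list)

        def step(self, name: str, prompt: str) -> str:
            response = call_llm(prompt)
            self.trace.append({"step": name, "prompt": prompt, "response": response})
            return response

    chain = TracedChain()
    plan = chain.step("plan", "Break the task into steps: classify each row of the table.")
    answer = chain.step("answer", f"Using this plan:\n{plan}\nClassify row 17.")
    for record in chain.trace:
        # Inspect each step's output, not just the final answer.
        print(record["step"], "->", record["response"])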
