ryankrage77 | 7 months ago
LLMs will never be 100% reliable by their very nature, so the obvious solution is to limit what their output can affect. Validating untrusted data and constraining what it can do is already standard practice for many forms of user input.
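A minimal sketch of what that could look like, treating model output with the same discipline as untrusted user input: parse it, check it against an allowlist of permitted actions, and refuse everything else. All names here (ALLOWED_ACTIONS, dispatch, the handlers) are illustrative assumptions, not any particular library's API.

```python
from typing import Callable

# Hypothetical allowlist: the only side effects the model may trigger.
ALLOWED_ACTIONS: dict[str, Callable[[str], str]] = {
    "lookup_order": lambda arg: f"order status for {arg}",
    "send_reset_link": lambda arg: f"reset link sent to {arg}",
}

def dispatch(llm_output: str) -> str:
    """Treat model output as untrusted input: validate against the
    allowlist before executing anything, refuse the rest."""
    action, _, arg = llm_output.strip().partition(" ")
    handler = ALLOWED_ACTIONS.get(action)
    if handler is None:
        return f"refused: {action!r} is not an allowed action"
    return handler(arg)

print(dispatch("lookup_order 12345"))    # allowed, runs the handler
print(dispatch("delete_all_users now"))  # refused, never executes
```

The point is that the blast radius is fixed by the allowlist, not by how well the model behaves, so even a fully wrong output can only trigger actions you already decided were safe.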
A lot of these failures seem to come from people hyped about LLMs, anthropomorphising them and thus being overconfident in their output (blaming the hammer for hitting your thumb).