
ryankrage77 | 7 months ago

I think a lot of these issues could be worked around by having the working state backed up after each step (e.g., make a git commit or similar). The LLM should not have any information about this backup process in its context, or any access to it, so it can't 'get confused' by it or tamper with it.
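A minimal sketch of that idea, using plain directory copies in place of the git commits the comment suggests (so the example is self-contained); the `Checkpointer` class and the agent-step setup here are hypothetical, not an existing tool:

```python
import shutil
import tempfile
from pathlib import Path

class Checkpointer:
    """Snapshot the working directory after each agent step.

    Snapshots live *outside* the working tree, and nothing about
    them appears in the LLM's context, so a misbehaving step can
    neither discover nor corrupt them.
    """

    def __init__(self, workdir: Path):
        self.workdir = workdir
        self.store = Path(tempfile.mkdtemp(prefix="ckpt-"))
        self.steps = 0

    def snapshot(self) -> int:
        # Record the current state as a numbered step.
        self.steps += 1
        shutil.copytree(self.workdir, self.store / f"step-{self.steps}")
        return self.steps

    def restore(self, step: int) -> None:
        # Roll the working tree back to a known-good state.
        shutil.rmtree(self.workdir)
        shutil.copytree(self.store / f"step-{step}", self.workdir)

# Usage: snapshot after a good step; if a later step corrupts the
# tree, the harness rolls it back without the LLM's involvement.
work = Path(tempfile.mkdtemp(prefix="work-"))
(work / "main.py").write_text("print('hello')\n")

ckpt = Checkpointer(work)
good = ckpt.snapshot()

# A bad step clobbers a file...
(work / "main.py").write_text("garbage")

# ...and the harness restores the last good snapshot.
ckpt.restore(good)
print((work / "main.py").read_text())  # → print('hello')
```

With git instead of copies, the same shape works by committing to a hidden ref namespace (e.g. `refs/backups/…`) that the agent never lists or reads.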

LLMs will never be 100% reliable by their very nature, so the obvious solution is to limit what their output can affect. This is already standard practice for many forms of user input.

A lot of these failures seem to come from people hyped about LLMs anthropomorphising them, and thus being overconfident in them (blaming the hammer for hitting your thumb).
