Perhaps the solution(s) need to be less focused on output quality and more on having a solid process for dealing with errors. Think undo, containers, git, CRDTs, or whatever, rather than zero tolerance for errors. That probably also means some kind of review for the irreversible bits of any process, and perhaps even process changes, where possible, to make common processes more reversible (which sounds like an extreme challenge in some cases). I can't imagine we're anywhere even close to the kind of perfection required not to need something like this - if it's even possible. Humans use all kinds of review and audit processes precisely because perfection is rarely attainable, and that might be fundamental.
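The "undo plus review for irreversible bits" idea can be sketched in a few lines. This is a hypothetical illustration, not any real framework's API: actions carry an optional `undo`, and anything without one has to pass a reviewer callback before it runs.

```python
# Illustrative sketch: reversible actions can be rolled back; irreversible
# ones (no undo) require explicit approval before executing.

class Action:
    def __init__(self, name, run, undo=None):
        self.name = name
        self.run = run
        self.undo = undo          # None => irreversible

class Session:
    def __init__(self, reviewer):
        self.reviewer = reviewer  # callback approving irreversible actions
        self.log = []             # executed actions, for rollback

    def execute(self, action):
        if action.undo is None and not self.reviewer(action):
            return False          # irreversible and not approved: refuse
        action.run()
        self.log.append(action)
        return True

    def rollback(self, n=1):
        # undo the last n actions, newest first, where an undo exists
        for action in reversed(self.log[-n:]):
            if action.undo is not None:
                action.undo()
        del self.log[-n:]
```

The point is only that reversibility becomes a property you can check and gate on, rather than something each process handles ad hoc.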
_bin_|11 months ago
It is almost impossible to produce a useful result, as far as I've seen, unless one eliminates that mistake from the context window.
instakill|11 months ago
There are so many times where I get to a point where the conversation is finally flowing in the way that I want and I would love to "fork" into several directions from that one specific part of the conversation.
Instead I have to rely on a prompt that asks the LLM to compress the entire conversation into a non-prose format that attempts to be as semantically lossless as possible; this sadly never works as intended.
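If you hold the raw message list yourself, "forking" is just copying the prefix up to the turn you liked - no lossy compression needed. A minimal sketch, assuming the common `{"role": ..., "content": ...}` message convention (not tied to any particular API):

```python
import copy

def fork(messages, turn):
    """Return an independent copy of the conversation up to `turn`."""
    return copy.deepcopy(messages[:turn])

history = [
    {"role": "user", "content": "Refactor this function."},
    {"role": "assistant", "content": "Here is a first attempt..."},
    {"role": "user", "content": "Good - now handle the edge cases."},
]

branch_a = fork(history, 2)  # explore one direction
branch_b = fork(history, 2)  # explore another, from the same point
branch_a.append({"role": "user", "content": "Optimize for speed."})
# branch_b (and history) are unaffected by edits to branch_a
```

Chat UIs that only expose a single linear thread make this awkward, but against the bare API each branch is just another message list.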
PeterStuer|11 months ago
Certainly true, but coaching it past the mistake sometimes helps (not always):
- roll back to the point before the mistake.
- add instructions to avoid the same path: "Do not try X. We tried X; it does not work because it leads to Y."
- add resources that could resolve a misunderstanding (API documentation, library code)
- rerun the request (improve/reword with observed details or insights)
I feel like some of the agentic frameworks already include some of these heuristics, but a helping hand can still work to your benefit.
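The steps above amount to a simple retry loop: roll back to the pre-mistake history, accumulate "do not try X" notes and extra resources, and rerun. A hedged sketch - `call_llm` is a placeholder for whatever model call and mistake-detection you actually use:

```python
def retry_with_guidance(call_llm, base_messages, max_attempts=3):
    """Rerun from a clean history, adding avoid-notes after each failure.

    call_llm(messages) -> (reply, mistake); mistake is None on success.
    """
    notes = []       # accumulated "avoid this path" instructions
    resources = []   # docs / library code added after misunderstandings
    for attempt in range(max_attempts):
        messages = list(base_messages)   # roll back: fresh copy each try
        if notes:
            messages.append({"role": "user",
                             "content": "Avoid these paths:\n" + "\n".join(notes)})
        for r in resources:
            messages.append({"role": "user", "content": r})
        reply, mistake = call_llm(messages)
        if mistake is None:
            return reply
        notes.append(f"Do not try {mistake}; it did not work.")
    return None
```

The key detail is that the failed attempt itself never re-enters the context - only a short note about it does, which matches the observation upthread that the mistake has to be eliminated from the window.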
ModernMech|11 months ago
LLMs are supposed to save us from the toils of software engineering, but it looks like we're going to reinvent software engineering to make AI useful.
Problem: Programming languages are too hard.
Solution: AI!
Problem: AI is not reliable, it's hard to specify problems precisely so that it understands what I mean unambiguously.
Solution: Programming languages!
Workaccount2|11 months ago
When smartphones first popped up, browsing the web on them was a pain. Now pretty much the whole web has phone versions that make it easier*.
*I recognize the folly of stating this on HN.
otabdeveloper4|11 months ago
Well, cryptocurrency was supposed to save us from the inefficiencies of the centralized banking system.
There's a lesson to be learned here, but alas our society's collective context window is less than five years.