top | item 47025435

anthonypasq96 | 14 days ago

I agree, and it's strange that this failure mode continually gets lumped onto AI. The whole point of long-term software engineering practice was to ensure that the context inside a particular person's head doesn't determine whether a new employee can contribute to a codebase. It turns out everything we do to make that true for a human also works for an agent.

As far as I can tell, the only reason AI agents currently fail is that they don't have access to the undocumented context inside people's heads, and if we can just properly put that in text somewhere, there will be no problems.


daveguy | 14 days ago

The failure mode gets lumped onto AI because AI is a lot more likely to fail.

We've done this with Neural Networks v1, Expert Systems, Neural Networks v2, SVMs, etc. It's only a matter of time before we figure it out with deep neural networks. We're clearly getting closer with every cycle, but there's no telling how many cycles we have left, because there is no sound theoretical framework.

vidarh | 14 days ago

At the same time, we have spent a large part of the existence of civilisation figuring out organisational structures and methods for building resilient processes out of unreliable humans, and it turns out a lot of those methods also work on agents. People just often seem miffed that they have to apply them to computers too.