Is it concerning to anyone else that the "Simple & Reliable" and "Reliable on Longer Tasks" diagrams look kind of like the much-maligned waterfall design process?
worik|6 months ago
I am mostly worried that I am wrong in my opinion that "agents" are a bad paradigm for working with LLMs.
I have been using LLMs since I got my first OpenAI API key, and I think "human in the loop" is what makes them special.
I have massively increased my fun, and significantly increased my productivity, using just the raw chat interface.
It seems to me that building agents to do work that I am responsible for is the opposite of fun, and a productivity sink, as I correct the rare (but I must check for them) bananas mistakes these agents inevitably make.
The thing is, the same agent that made the bananas mistake is also quite good at catching that mistake (if called again with fresh context). This results in convergence on working, non-bananas solutions.