dongobread | 1 year ago
- Too many errors that just propagate on top of each other; if a single agent in the chain generates something even a little bit off, the whole system goes off the rails.
- You often end up having to pass a massive amount of shared context to every agent, which increases the cost dramatically.
Curiously enough we had an architect from OpenAI tell us the same thing about agent systems a few days ago (our company is a big spender so they serve a consulting function), so I don't think anybody is really finding success with multi-agent systems currently. IMO the core tech is nowhere near good enough yet.
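The compounding-error point above is easy to see with a little arithmetic. A minimal sketch (hypothetical numbers, assuming each agent's errors are independent): even with highly reliable individual agents, the chain's end-to-end success rate decays geometrically.

```python
def chain_success(p: float, n: int) -> float:
    """Probability that an n-agent chain produces no error,
    assuming each agent independently succeeds with probability p."""
    return p ** n

# Even 95%-reliable agents compound badly over a 10-step chain.
print(round(chain_success(0.95, 10), 2))  # -> 0.6
```

So a ten-agent pipeline of 95%-reliable agents fails roughly 40% of the time, which matches the "goes off the rails" experience described above.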
pennomi | 1 year ago
LLMs are like the perfect improv comedy troupe, they virtually always say “yes, and…”
echelon | 1 year ago
Check out Vtubers like CodeMiko, who improvs against LLM agents. Or 24/7 streaming LLM cartoon shows that take audience plot suggestions.
lmeyerov | 1 year ago
The ultimate answer is fairly short if you are a senior Python data scientist, like 50 LOC. The agents will wander and iterate until they push through. You might correct and tweak if it's a bit off.
Importantly, this uses agents the opposite of the way Devin-style AI engineer replacements are presented. Here, you get it to do a few steps, and then move on to the next few steps. The agents still crank away a ton and do all sorts of clever things for you... but to get you more reliably to the next step, versus something big and wrong.
arresin | 1 year ago