
adidoit | 3 months ago

Fascinating that the state of the art in building agentic harnesses for long-running agent workflows is to ... "use strongly worded instructions"

Anthropomorphism of LLMs is obviously flawed but remains the best way to actually build good Agents.

I do think this is one thing that will hold enterprise adoption back: can you really trust systems like these in production when the best control you can offer is pleading with the model not to do something?

Of course, good engineering will build deterministic verification and scaffolding in to prevent issues, but it is a fundamental limitation of LLMs.
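To make the "deterministic verification" point concrete, here is a minimal sketch of the idea: instead of relying on prompt-level pleading, every tool call the agent proposes passes through a hard-coded policy check before execution. All names here (`ALLOWED_TOOLS`, `ToolCall`, the example paths) are hypothetical, not from any particular framework.

```python
from dataclasses import dataclass, field

# Assumed example policy: which tools the agent may invoke, and which
# filesystem paths are off-limits regardless of the model's "intent".
ALLOWED_TOOLS = {"search", "read_file"}
BLOCKED_PATH_PREFIXES = ("/etc", "/var/secrets")

@dataclass
class ToolCall:
    name: str
    args: dict = field(default_factory=dict)

def is_permitted(call: ToolCall) -> bool:
    """Deterministic gate: reject disallowed tools or arguments.

    This runs outside the LLM, so no amount of prompt injection or
    model misbehavior can bypass it.
    """
    if call.name not in ALLOWED_TOOLS:
        return False
    path = call.args.get("path", "")
    return not any(path.startswith(p) for p in BLOCKED_PATH_PREFIXES)

def execute(call: ToolCall) -> str:
    if not is_permitted(call):
        raise PermissionError(f"blocked tool call: {call.name} {call.args}")
    # ... dispatch to the real tool implementation here ...
    return f"ran {call.name}"
```

The scaffold does not make the model trustworthy; it narrows the blast radius so that the model's worst-case action is bounded by code, not by instructions.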
