top | item 47060971

saezbaldo | 12 days ago

The fundamental issue here isn't the specific vulnerabilities — it's that these agent frameworks have no authorization layer at all. They validate outputs but never ask "does this agent have the authority to take this action?" Output filtering ≠ authority control. Every framework I've audited (LangChain, AutoGen, CrewAI, Anthropic Tool Use) makes the same assumption: the agent is trusted. None implement threshold authorization or consumable budgets.
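A minimal sketch of what the comment is asking for — an authorization layer consulted *before* the agent acts, combining threshold authorization (costly actions escalate to a human) with a consumable budget (autonomous spend draws down and eventually runs out). All names here (`AuthorizationPolicy`, `authorize`, the cost figures) are hypothetical illustrations, not an API from any of the frameworks mentioned:

```python
from dataclasses import dataclass


@dataclass
class AuthorizationPolicy:
    """Hypothetical per-agent policy, checked before every tool call.

    Actions costlier than `approval_threshold` always escalate to a human;
    cheaper actions draw down a consumable `budget` and are denied once
    it is exhausted. This is authority control, not output filtering.
    """
    approval_threshold: float  # cost above this always requires a human
    budget: float              # remaining autonomous spend

    def authorize(self, action: str, cost: float) -> str:
        if cost > self.approval_threshold:
            return "escalate"   # threshold authorization: human in the loop
        if cost > self.budget:
            return "deny"       # budget consumed: agent's authority is spent
        self.budget -= cost     # consumable: every action draws it down
        return "allow"


# Illustrative numbers only: $100 escalation threshold, $50 autonomous budget.
policy = AuthorizationPolicy(approval_threshold=100.0, budget=50.0)
print(policy.authorize("send_email", 10.0))      # allow; budget drops to 40
print(policy.authorize("wire_transfer", 500.0))  # escalate: above threshold
print(policy.authorize("bulk_update", 45.0))     # deny: exceeds remaining budget
```

The key design point is that the check runs on the *action request*, independent of whatever the model emitted, and the budget is stateful — an agent that was trusted at step 1 is not automatically trusted at step 100.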

No comments yet.