Show HN: AgentGuard – Auto-kill AI agents before they burn through your budget
47 points | dipampaul17 | 7 months ago | github.com
AgentGuard monitors API calls in real-time and automatically kills your process when it hits your budget limit.
How it works:
Add 2 lines to any AI project:
const agentGuard = require('agent-guard');
await agentGuard.init({ limit: 50 }); // $50 budget
// Your existing code runs unchanged
const response = await openai.chat.completions.create({...});
// AgentGuard tracks costs automatically
When your code hits $50 in API costs, AgentGuard stops execution and shows you exactly what happened.
Why I built this:
I got tired of seeing "I accidentally spent $500 on OpenAI" posts. Existing tools like tokencost help you measure costs after the fact, but nothing prevents runaway spending in real-time.
AgentGuard is essentially a circuit breaker for AI API costs. It's saved me from several costly bugs during development.
Limitations: Only works with OpenAI and Anthropic APIs currently. Cost calculations are estimates based on documented pricing.
Source: https://github.com/dipampaul17/AgentGuard
Install: npm i agent-guard
almost|7 months ago
If I were using something like this I think I'd rather have it wrap the AI API clients. Then it could throw an error if it doesn't recognise the client library I'm using. As it stands, it'll just silently fail to monitor anything that isn't on its supported list (whatever that is!)
I do think the idea is good though, just needs to be obvious how it will work when used and how/when it will fail.
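A minimal sketch of the wrapping approach the parent describes, assuming a hypothetical `guardClient` helper (this is not agent-guard's actual API): it refuses loudly when handed a client it can't meter, rather than silently skipping it.

```javascript
// Hypothetical wrapper: meters an OpenAI-style client, throws on unknown clients.
function guardClient(client, budget) {
  // Fail loudly for clients we don't know how to meter, instead of
  // silently leaving them unmonitored.
  if (!client?.chat?.completions?.create) {
    throw new Error('agent-guard: unrecognised client, refusing to run unmetered');
  }
  let spent = 0;
  const original = client.chat.completions.create.bind(client.chat.completions);
  client.chat.completions.create = async (...args) => {
    if (spent >= budget.limit) {
      throw new Error(`agent-guard: budget of $${budget.limit} exhausted`);
    }
    const response = await original(...args);
    spent += budget.estimate(response); // cost estimate from the usage data
    return response;
  };
  return client;
}
```

The key property is that failure is explicit at both ends: an unsupported client throws at wrap time, and an exhausted budget throws at call time.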
yifanl|7 months ago
It's an... intrusive solution. Glad to hear it works for you though.
samsk|7 months ago
See https://docs.litellm.ai/docs/proxy/users
stingraycharles|7 months ago
And if this is really a problem, why not funnel your AI agents through a proxy server which they all support instead of this hacky approach? It would be super easy to build a proxy server that keeps track of costs per day/session and just returns errors once you hit a limit.
unknown|7 months ago
[deleted]
hansmayer|7 months ago
> The README now matches what developers actually experience: two lines of code, automatic tracking, no code changes needed.
Hey OP - next time perhaps at least write the commit messages yourself?
jeffhuys|7 months ago
[1] https://github.com/dipampaul17/AgentGuard/blob/51395c36809aa...
[2] https://github.com/dipampaul17/AgentGuard/commit/d49b361d7f3...
[3] https://github.com/dipampaul17/AgentGuard/blob/083ae9896459b...
diggan|7 months ago
It's kind of crazy that people use these multi-billion parameter machine learning models to do search/replace of words in text files, rather than the search/replace in their code editor. I wonder what the efficiency difference is, must be 1000x or even 10000x difference?
Don't get me wrong, I use LLMs too, but mostly for things I wouldn't be able to do myself (like isolated math-heavy functions I can't bother to understand the internals of), not for trivial things like changing "test" to "step" across five files.
I love that the commit ends with
> Codebase is now enterprise-ready with professional language throughout
Like "enterprise-ready" is about error messages and using "Examples" instead of "Demo".
delusional|7 months ago
> The foundation is bulletproof. Time to execute the 24-hour revenue sprint.
Comedy gold. This is one of those times where I can't figure out if the author is in on the joke, or if they're actually so deluded that they think this doesn't make them look idiotic. If it's the latter, we need to bring bullying back.
Either way it's hilarious.
bwfan123|7 months ago
While you are at it, use the term "guardrails" as that is quite fashionable.