
Show HN: AgentGuard – Auto-kill AI agents before they burn through your budget

47 points | dipampaul17 | 7 months ago | github.com

Your AI agent hits an infinite loop and racks up $2000 in API charges overnight. This happens weekly to AI developers.

AgentGuard monitors API calls in real-time and automatically kills your process when it hits your budget limit.

How it works:

Add 2 lines to any AI project:

  const agentGuard = require('agent-guard');
  await agentGuard.init({ limit: 50 }); // $50 budget

  // Your existing code runs unchanged
  const OpenAI = require('openai');
  const openai = new OpenAI();
  const response = await openai.chat.completions.create({...});
  // AgentGuard tracks costs automatically
When your code hits $50 in API costs, AgentGuard stops execution and shows you exactly what happened.
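The "stop execution at the budget limit" behavior amounts to a circuit breaker over cumulative spend. A minimal sketch of the idea (purely illustrative, not AgentGuard's actual internals; the flat per-token rate and function names are assumptions):

```javascript
// Hypothetical circuit breaker: accumulate estimated spend per call
// and throw once the configured budget is exhausted.
const PRICE_PER_1K_TOKENS = 0.002; // assumed flat rate for illustration

function makeBudgetGuard(limitUsd) {
  let spentUsd = 0;
  return {
    // Record a call's token usage; throws once the budget is hit.
    record(tokens) {
      spentUsd += (tokens / 1000) * PRICE_PER_1K_TOKENS;
      if (spentUsd >= limitUsd) {
        throw new Error(`Budget exceeded: $${spentUsd.toFixed(2)} of $${limitUsd}`);
      }
    },
    spent: () => spentUsd,
  };
}

const guard = makeBudgetGuard(50);
guard.record(500000); // 500k tokens at $0.002/1k ≈ $1.00, well under budget
console.log(guard.spent().toFixed(2)); // "1.00"
```

A real implementation would look up per-model pricing rather than a flat rate, but the shape is the same: every recorded call either fits under the limit or aborts the process.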

Why I built this:

I got tired of seeing "I accidentally spent $500 on OpenAI" posts. Existing tools like tokencost help you measure costs after the fact, but nothing prevents runaway spending in real-time.

AgentGuard is essentially a circuit breaker for AI API costs. It's saved me from several costly bugs during development.

Limitations: Only works with OpenAI and Anthropic APIs currently. Cost calculations are estimates based on documented pricing.

Source: https://github.com/dipampaul17/AgentGuard

Install: npm i agent-guard

26 comments


almost|7 months ago

So it monkey-patches a set of common HTTP libraries and then detects calls to AI APIs? It's not obvious which APIs it would detect or in what situations it would miss them. Seems kind of dangerous to rely on something like that: you install it and it might be doing nothing, and you only find out after something's gone wrong.

If I were using something like this I think I'd rather have it wrap the AI API clients. Then it can throw an error if it doesn't recognise the client library I'm using. As it stands, it'll just silently fail to monitor if what I'm using isn't in its supported list (whatever that is!)
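The wrapper approach the commenter describes can be sketched roughly like this (all names here are hypothetical, not AgentGuard's API): refuse to wrap anything you don't recognise, so an unsupported client fails loudly at setup time instead of silently going unmonitored.

```javascript
// Hypothetical "wrap the client" design: only known clients are wrapped,
// and every method call is reported before being forwarded.
const SUPPORTED = new Set(['openai', 'anthropic']);

function wrapClient(name, client, onCall) {
  if (!SUPPORTED.has(name)) {
    // Fail loudly instead of silently not monitoring.
    throw new Error(`Unsupported client "${name}"; cost tracking would be a no-op`);
  }
  return new Proxy(client, {
    get(target, prop) {
      const value = target[prop];
      if (typeof value !== 'function') return value;
      return (...args) => {
        onCall(name, String(prop)); // report the call for cost tracking
        return value.apply(target, args);
      };
    },
  });
}

const calls = [];
const fakeClient = { complete: (prompt) => `echo: ${prompt}` };
const guarded = wrapClient('openai', fakeClient, (n, m) => calls.push(`${n}.${m}`));
console.log(guarded.complete('hi')); // "echo: hi"
console.log(calls); // ["openai.complete"]
```

The trade-off is exactly the one raised above: the wrapper can't miss a call on a wrapped client, but it also can't see traffic from clients it was never given.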

I do think the idea is good though; it just needs to be obvious how it will work when used and how/when it will fail.

yifanl|7 months ago

So this is essentially monkey-patching every variation of fetch/library fetch and doing math on the reported token counts?
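For readers unfamiliar with the technique: monkey-patching a fetch-style entry point looks roughly like the sketch below (illustrative only, not AgentGuard's code; the host list and callback are assumptions). It also shows the failure mode discussed above: any request to a host not on the list passes through unnoticed.

```javascript
// Replace a fetch function with a wrapper that inspects the URL
// before delegating to the original.
const AI_HOSTS = ['api.openai.com', 'api.anthropic.com'];

function patchFetch(globalObj, onAiCall) {
  const originalFetch = globalObj.fetch;
  globalObj.fetch = (url, options) => {
    const host = new URL(url).hostname;
    if (AI_HOSTS.includes(host)) onAiCall(host); // tracked
    return originalFetch(url, options);         // everything else: untouched
  };
}

// Demonstrate with a stub environment instead of the real network:
const seen = [];
const env = { fetch: async (url) => ({ ok: true, url }) };
patchFetch(env, (host) => seen.push(host));
env.fetch('https://api.openai.com/v1/chat/completions');
env.fetch('https://example.com/healthz'); // not an AI host, silently ignored
console.log(seen); // ["api.openai.com"]
```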

It's an... intrusive solution. Glad to hear it works for you though.

RedShift1|7 months ago

Just be glad it's not AI based... :')

samsk|7 months ago

For this I use the LiteLLM proxy - you can create virtual keys with a daily/weekly/... budget; it's pretty flexible and has a nice UI.

See https://docs.litellm.ai/docs/proxy/users

oc1|7 months ago

But do you use it locally? It seems to be more of a server-side product.

throwaway_ocr|7 months ago

Wouldn't the obvious solution to this problem be to stop using agents that don't respect your usage limits, instead of trying to build sketchy containers around misbehaving software?

stingraycharles|7 months ago

Yeah, I don't understand this problem. Who uses so many agents at the same time in the first place?

And if this is really a problem, why not funnel your AI agents through a proxy server which they all support instead of this hacky approach? It would be super easy to build a proxy server that keeps track of costs per day/session and just returns errors once you hit a limit.

hansmayer|7 months ago

"Commit 2ef776f dipampaul17 committed Jul 31, 2025 · Update READMEs: honest, clear, aesthetic - Removed pretentious language and marketing speak - Added real developer experience based on actual testing - Clear, direct explanations of what it actually does - Aesthetic improvements with better formatting - Accurate feature descriptions based on verified functionality - Honest about capabilities without overselling - Reflects the 30-second integration we tested

The README now matches what developers actually experience: two lines of code, automatic tracking, no code changes needed."

Hey OP - next time perhaps at least write the commit messages yourself?

jeffhuys|7 months ago

Honestly feels very vibe-coded [1] [2] and would not really trust my money with something like this. I had to read the code to understand what it actually protects me from, as the README.md (other than telling me it's production-ready, professional, and protects me from so much!) tells me "Supports all major providers: OpenAI, Anthropic, auto-detected from URLs". OpenAI and Anthropic are "all" major providers [3]?

[1] https://github.com/dipampaul17/AgentGuard/blob/51395c36809aa...

[2] https://github.com/dipampaul17/AgentGuard/commit/d49b361d7f3...

[3] https://github.com/dipampaul17/AgentGuard/blob/083ae9896459b...

diggan|7 months ago

> [1] https://github.com/dipampaul17/AgentGuard/commit/d49b361d7f3...

It's kind of crazy that people use these multi-billion parameter machine learning models to do search/replace of words in text files, rather than the search/replace in their code editor. I wonder what the efficiency difference is, must be 1000x or even 10000x difference?

Don't get me wrong, I use LLMs too, but mostly for things I wouldn't be able to do myself (like isolated math-heavy functions I can't bother to understand the internals of), not for trivial things like changing "test" to "step" across five files.

I love that the commit ends with

> Codebase is now enterprise-ready with professional language throughout

Like "enterprise-ready" is about error messages and using "Examples" instead of "Demo".

eqvinox|7 months ago

It's incredible how emoji overuse has become a giant red flag for AI over-/abuse.

StevenWaterman|7 months ago

The AI's idea of developing a startup is eerily reminiscent of a hacking scene in CSI

delusional|7 months ago

> Close first customers at $99/month

> The foundation is bulletproof. Time to execute the 24-hour revenue sprint.

Comedy gold. This is one of those times where I can't figure out if the author is in on the joke, or if they're actually so deluded that they think this doesn't make them look idiotic. If it's the latter, we need to bring bullying back.

Either way it's hilarious.

can16358p|7 months ago

I really wonder how much $$$ was burned while testing this against production.

diggan|7 months ago

I guess about $0.01 per test run, and maybe you run it 100 times (if even) so about $1?

atemerev|7 months ago

So that's how AGI will escape containment.

bwfan123|7 months ago

Expand it to "enterprise security solutions for agent deployments" and you will get VC funding for it. For any new technology, the playbook is to create startups that do "compliance", "governance", "security", and "observability" around that technology, so that the big security companies can acquire said startup and add it as a feature to their existing products.

While you are at it, use the term "guardrails" as that is quite fashionable.