item 46781011

Ask HN: How do you budget for token based AI APIs?

1 point | Barathkanna | 1 month ago

The norm today for using AI models via APIs is token-based pricing: you pay for what you use.

While this isn’t hard to understand, in practice it makes costs harder to predict, especially for small teams moving from experiments to early production. This feels less like a technical problem and more like a budgeting and planning problem.

I’m curious about alternative pricing abstractions, for example a subscription with unlimited tokens but a capped number of requests, aimed at making monthly spend easier to reason about while building.
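To make the budgeting problem concrete, monthly spend under token-based pricing is basically volume times per-token price. A minimal sketch of that estimate, where all rates and volumes are hypothetical placeholders rather than any real provider's pricing:

```python
# Rough monthly-spend estimate under token-based pricing.
# All numbers below are hypothetical placeholders, not real provider rates.
def estimate_monthly_cost(requests_per_month, avg_input_tokens, avg_output_tokens,
                          input_price_per_1k, output_price_per_1k):
    input_cost = requests_per_month * avg_input_tokens / 1000 * input_price_per_1k
    output_cost = requests_per_month * avg_output_tokens / 1000 * output_price_per_1k
    return input_cost + output_cost

# e.g. 50k requests/month, 1.5k input / 0.5k output tokens per request,
# at $0.003 / $0.015 per 1k tokens -> about $600/month
cost = estimate_monthly_cost(50_000, 1_500, 500, 0.003, 0.015)
```

The unpredictability comes from the fact that every input here (request volume, tokens per request) is an estimate that drifts as usage patterns change, which is exactly what a request-capped subscription would pin down.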

For people running AI in production today, does token-based billing give you enough predictability, or would a model like this actually reduce friction? What tradeoffs would matter most to you?

4 comments


mag123c|1 month ago

I built a local CLI tool to track my daily token usage across Claude Code, Codex, and Gemini CLI. Parsing the session logs directly (JSONL/JSON) with simd-json gives me exact numbers without relying on any external API. Knowing my actual spend per day changed how I use these tools — I was burning 3x more on cache misses than I realized.
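A minimal version of that idea in Python, using the stdlib `json` module rather than simd-json, and assuming a hypothetical per-line schema with an ISO-8601 `timestamp` plus `input_tokens` / `output_tokens` fields (the real session-log formats differ per tool):

```python
import json
from collections import defaultdict
from pathlib import Path

def daily_token_totals(log_path):
    """Sum input/output tokens per calendar day from a JSONL session log.

    Assumes each line is a JSON object with an ISO-8601 'timestamp' and
    integer 'input_tokens' / 'output_tokens' fields -- a hypothetical
    schema, not the exact format any particular CLI writes.
    """
    totals = defaultdict(lambda: {"input": 0, "output": 0})
    for line in Path(log_path).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines
        record = json.loads(line)
        day = record["timestamp"][:10]  # "YYYY-MM-DD" prefix of the ISO timestamp
        totals[day]["input"] += record.get("input_tokens", 0)
        totals[day]["output"] += record.get("output_tokens", 0)
    return dict(totals)
```

Multiplying the daily totals by the published per-token rates gives the per-day spend figure the comment describes.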

storystarling|1 month ago

I found that the only way to get true fixed costs is renting the GPUs and self-hosting. The unlimited API plans usually come with strict rate limits or concurrency caps that make them unusable for production traffic. You basically have to choose between billing variance or taking on the devops overhead of managing your own instances.
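The choice described above reduces to a break-even calculation: at what monthly token volume does a fixed GPU rental beat pay-per-token billing? A minimal sketch, with the rental rate and per-token price as hypothetical placeholders:

```python
def breakeven_tokens_per_month(gpu_monthly_cost, api_price_per_1k_tokens):
    """Token volume at which fixed GPU rental equals pay-per-token API spend."""
    return gpu_monthly_cost / api_price_per_1k_tokens * 1000

# e.g. a hypothetical $1,200/month rented GPU vs $0.002 per 1k tokens:
# self-hosting breaks even around 600M tokens/month -- before accounting
# for the ops and capacity-planning overhead the comment mentions.
tokens = breakeven_tokens_per_month(1200, 0.002)
```

Below the break-even volume, token billing is cheaper despite its variance; above it, the fixed cost wins only if you can also absorb the DevOps overhead.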

Barathkanna|1 month ago

Agreed. Self-hosting gives the cleanest fixed cost, but you pay for it in ops and capacity planning. I’m mainly curious whether there’s a middle ground that gives early teams more predictable spend without immediately taking on full infra overhead.