
Show HN: Cost per Outcome for AI Workflows

4 points | deborahjacob | 6 days ago | github.com

4 comments


deborahjacob|6 days ago

Hi HN, I’m the technical founder of botanu (www.botanu.ai)

I started building this after repeatedly hitting the same problem on AI teams: we could see total LLM spend, but couldn't answer "what did one successful outcome actually cost?" In real systems, a single business event often requires multiple runs (e.g. retries, fallbacks, escalations, async workers) before it reaches a final outcome. Most tooling tracks individual calls, or at best single runs, and that hides the true cost. botanu treats cost per outcome as the sum of all runs and attempts for an event, including failures.
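To make the accounting concrete, here is a minimal sketch of that definition. The event/run IDs, costs, and statuses are hypothetical illustration data, not botanu's actual schema:

```python
# Cost per outcome = sum of every run recorded for an event,
# failed attempts included.
from collections import defaultdict

# Each record is one run: (event_id, run_id, cost_usd, status).
runs = [
    ("evt-1", "run-a", 0.020, "failed"),   # first attempt timed out
    ("evt-1", "run-b", 0.015, "failed"),   # fallback path also failed
    ("evt-1", "run-c", 0.030, "success"),  # retry finally succeeded
    ("evt-2", "run-d", 0.010, "success"),  # clean single-run event
]

def cost_per_event(runs):
    """Roll run costs up to their shared event_id."""
    totals = defaultdict(float)
    for event_id, _run_id, cost, _status in runs:
        totals[event_id] += cost  # failures count toward the outcome's cost
    return dict(totals)

for event_id, total in cost_per_event(runs).items():
    print(event_id, round(total, 3))
```

The point of the example: evt-1 "cost" three runs, so a per-call or per-run view would report a third of its true cost per outcome.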

How it works:

-An event represents business intent

-Each attempt is a run, with its own run_id

-All runs are linked via a shared event_id

-A single outcome (success / failure / partial) is emitted for the event

-Total cost = cost of all runs for that event

-Run context propagates across services using standard W3C Baggage (OpenTelemetry).
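The propagation step above can be sketched as follows. This is a stdlib-only illustration of the W3C Baggage header format that OpenTelemetry's baggage propagator reads and writes on the wire; the helper names and the `event_id`/`run_id` keys follow the model described above and are assumptions, not botanu's API:

```python
# W3C Baggage is a comma-separated list of key=value pairs carried in a
# "baggage" HTTP header, so run context survives hops across services.
from urllib.parse import quote, unquote

def inject_baggage(headers: dict, entries: dict) -> None:
    """Serialize entries into a 'baggage' header on an outgoing request."""
    headers["baggage"] = ",".join(
        f"{k}={quote(str(v))}" for k, v in entries.items()
    )

def extract_baggage(headers: dict) -> dict:
    """Parse the 'baggage' header on an incoming request."""
    raw = headers.get("baggage", "")
    entries = {}
    for item in filter(None, (part.strip() for part in raw.split(","))):
        key, _, value = item.partition("=")
        entries[key] = unquote(value)
    return entries

# Upstream service: tie this run to its business event before calling out.
outgoing = {}
inject_baggage(outgoing, {"event_id": "evt-1", "run_id": "run-c"})

# Downstream worker: recover the event_id so its run cost rolls up
# to the same event's outcome.
ctx = extract_baggage(outgoing)
print(ctx["event_id"])  # evt-1
```

In practice OpenTelemetry's baggage propagator handles this serialization for you; the sketch just shows why any async worker that receives the header can attribute its cost to the originating event.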

I’m building this as part of a broader effort around outcome-based pricing for AI systems and understanding true cost per outcome. If you’re thinking about similar problems, I’d love to chat and compare notes. Happy to answer technical questions or get critical feedback. Email: deborah@botanu.ai

opsmeter|5 days ago

“Cost per outcome” is the metric most teams actually need. In prod we saw totals look fine while cost/outcome drifted due to retries + fallback paths + context creep. Are you planning a before/after deploy comparison (prompt/version) to catch regressions, or anomaly alerts on cost/outcome slope?

deborahjacob|2 days ago

Yes, we are building both.