mag123c's comments
mag123c | 1 month ago | on: Show HN: Toktrack – Track your Claude Code token spending in under a second
Either way, toktrack caches cost data independently, so past history is preserved.
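The idea is roughly this (illustrative sketch only, not toktrack's actual cache format or location; serde_json assumed as a dependency, and the file name and cost values are stand-ins):

    // Illustrative only: keep an independent on-disk cache of per-file
    // costs, so totals still include session files that were later
    // deleted or rotated. File name and format here are hypothetical.
    use std::collections::HashMap;
    use std::{fs, path::Path};

    fn load_cache(path: &Path) -> HashMap<String, f64> {
        fs::read_to_string(path)
            .ok()
            .and_then(|s| serde_json::from_str(&s).ok())
            .unwrap_or_default()
    }

    fn main() {
        let cache_path = Path::new("cost-cache.json"); // hypothetical
        let mut cache = load_cache(cache_path);
        // Merge in costs for whatever session files exist right now
        // (stand-in value instead of real parsing)...
        cache.insert("session-abc.jsonl".into(), 1.23);
        // ...but total over everything ever seen, so history survives
        // even after the original files are gone.
        let total: f64 = cache.values().sum();
        println!("lifetime cost: ${total:.2}");
        if let Ok(json) = serde_json::to_string(&cache) {
            let _ = fs::write(cache_path, json);
        }
    }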
mag123c | 1 month ago | on: Show HN: Toktrack – Track your Claude Code token spending in under a second
My session files totaled ~3 GB across 2,000+ files. I first tried a Node.js approach, but it took 40+ seconds – sequential JSON.parse, GC overhead, and libuv thread-pool limits made it hard to optimize further.
Rewrote it in Rust with simd-json (SIMD-accelerated parsing) + rayon (parallel file processing). Cold start: ~1s, warm: ~0.04s.
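The core fits in a screenful. A minimal sketch of the shape (not the actual toktrack source: the usage.output_tokens field name is an assumption, and error handling is trimmed):

    // Sketch: parse JSONL session files in parallel and sum a token field.
    // Assumes simd-json and rayon as dependencies.
    use rayon::prelude::*;
    use simd_json::prelude::*;
    use std::{fs, path::PathBuf};

    fn file_tokens(path: &PathBuf) -> u64 {
        let mut buf = fs::read(path).unwrap_or_default();
        // simd-json parses in place, so split the buffer into mutable
        // per-line slices (session logs are JSONL: one object per line).
        buf.split_mut(|&b| b == b'\n')
            .filter(|line| !line.is_empty())
            .filter_map(|line| simd_json::to_borrowed_value(line).ok())
            .filter_map(|doc| {
                doc.get("usage") // hypothetical field names
                    .and_then(|u| u.get("output_tokens"))
                    .and_then(|t| t.as_u64())
            })
            .sum()
    }

    fn main() {
        let files: Vec<PathBuf> =
            std::env::args().skip(1).map(PathBuf::from).collect();
        // rayon: par_iter() fans the per-file work out across all cores.
        let total: u64 = files.par_iter().map(file_tokens).sum();
        println!("total output tokens: {total}");
    }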
Also supports Codex CLI and Gemini CLI.
Install: npx toktrack
mag123c | 1 month ago | on: Ask HN: How do you budget for token based AI APIs?
The actual speed comes from:
1. simd-json: This is the big one. It uses CPU SIMD instructions to process many bytes of JSON per instruction at the hardware level. We're talking ~3 GB/s vs ~300 MB/s for standard parsers.
2. rayon: Dead-simple parallel processing. Instead of parsing 2,000 files one by one, it spreads them across all CPU cores.
3. Rust itself: No GC means no random pauses when you're crunching through gigabytes of data. The original Node.js version would just... freeze sometimes.
The 40s → 0.04s improvement is basically the answer to "what if we actually used the hardware properly?": SIMD for parsing, all cores for parallelism, no GC getting in the way. (I should probably fix that README line - thanks for pointing it out!)
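If you want to see the parser gap on your own data, a rough harness looks like this (assumes serde_json and simd-json as dependencies; session.jsonl is a stand-in path, and the absolute numbers will vary by CPU and data):

    // Rough comparison: same JSONL buffer, scalar parser vs SIMD parser.
    use std::{fs, time::Instant};

    fn main() {
        let bytes = fs::read("session.jsonl").expect("test file");

        // Baseline: serde_json, a typical scalar parser.
        let start = Instant::now();
        let parsed = bytes
            .split(|&b| b == b'\n')
            .filter(|line| !line.is_empty())
            .filter_map(|line| serde_json::from_slice::<serde_json::Value>(line).ok())
            .count();
        println!("serde_json: {parsed} docs in {:?}", start.elapsed());

        // simd-json parses in place, so it gets its own mutable copy.
        let mut copy = bytes.clone();
        let start = Instant::now();
        let parsed = copy
            .split_mut(|&b| b == b'\n')
            .filter(|line| !line.is_empty())
            .filter_map(|line| simd_json::to_borrowed_value(line).ok())
            .count();
        println!("simd-json:  {parsed} docs in {:?}", start.elapsed());
    }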