mag123c's comments

mag123c | 1 month ago | on: Show HN: Toktrack – 1000x faster AI CLI cost tracker (Rust and SIMD)

Good question! Honestly, the README is a bit misleading - ratatui is just for the terminal UI, not performance. Let me clarify:

The actual speed comes from:

1. simd-json: This is the big one. It uses CPU SIMD instructions to process many JSON bytes per instruction, so validation and parsing happen at the hardware level. We're talking ~3 GiB/s vs ~300 MB/s with standard parsers.

2. rayon: Dead simple parallel processing. Instead of parsing 2,000 files one by one, it spreads them across all CPU cores.

3. Rust itself: No GC means no random pauses when you're crunching through gigabytes of data. The original Node.js version would just... freeze sometimes.

The 40s → 0.04s improvement is basically "what if we actually used the hardware properly?" SIMD for parsing, all cores for parallelism, no GC getting in the way. (I should probably fix that README line - thanks for pointing it out!)
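To make the fan-out concrete, here's a minimal std-only sketch of the shape of it. In toktrack, rayon does this scheduling automatically (work-stealing, no manual chunking) and the per-file work is a simd-json parse; here `std::thread::scope` stands in for rayon and `parse_file` is a stub that just returns a pre-baked token count:

```rust
use std::thread;

// Stub standing in for a simd-json parse of one session file;
// here each "file" is just a pre-baked token count.
fn parse_file(tokens: u64) -> u64 {
    tokens
}

// Fan the files out across all cores and sum the results.
fn total_tokens_parallel(files: &[u64]) -> u64 {
    let cores = thread::available_parallelism().map(|n| n.get()).unwrap_or(1);
    let chunk = (files.len() + cores - 1) / cores;
    thread::scope(|s| {
        files
            .chunks(chunk.max(1)) // one slice of files per worker thread
            .map(|c| s.spawn(move || c.iter().copied().map(parse_file).sum::<u64>()))
            .collect::<Vec<_>>() // spawn everything before joining anything
            .into_iter()
            .map(|h| h.join().unwrap())
            .sum()
    })
}
```

With rayon the whole body collapses to `files.par_iter().map(parse_file).sum()`, which is most of the appeal.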

mag123c | 1 month ago | on: Show HN: Toktrack – Track your Claude Code token spending in under a second

Hi HN, I built toktrack because I was spending a lot on Claude Code and had no easy way to track it.

My session files were ~3GB (2,000+ files). I first tried a Node.js approach but it took 40+ seconds – sequential JSON.parse, GC overhead, and libuv thread pool limits made it hard to optimize further.

Rewrote it in Rust with simd-json (SIMD-accelerated parsing) + rayon (parallel file processing). Cold start: ~1s, warm: ~0.04s.

Also supports Codex CLI and Gemini CLI.

Install: npx toktrack

mag123c | 1 month ago | on: Ask HN: How do you budget for token based AI APIs?

I built a local CLI tool to track my daily token usage across Claude Code, Codex, and Gemini CLI. Parsing the session logs directly (JSONL/JSON) with simd-json gives me exact numbers without relying on any external API. Knowing my actual spend per day changed how I use these tools - I was burning 3x more on cache misses than I realized.
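The "parse the logs directly" part is simpler than it sounds: each JSONL line is one record, and you just sum a token field per line. A std-only sketch, assuming a hypothetical numeric `total_tokens` field per line (real session logs have different field names, and toktrack uses simd-json rather than this naive substring scan):

```rust
// Naive scan for a numeric field in one JSONL line.
// A real implementation would parse the JSON properly (e.g. simd-json);
// this only illustrates "read the logs, sum the exact counts".
fn tokens_in_line(line: &str, field: &str) -> u64 {
    let key = format!("\"{field}\":");
    line.find(&key)
        .map(|i| {
            line[i + key.len()..]
                .chars()
                .skip_while(|c| c.is_whitespace())
                .take_while(|c| c.is_ascii_digit())
                .collect::<String>()
                .parse()
                .unwrap_or(0)
        })
        .unwrap_or(0)
}

// One JSONL record per line; lines without the field count as 0.
fn daily_total(jsonl: &str) -> u64 {
    jsonl.lines().map(|l| tokens_in_line(l, "total_tokens")).sum()
}
```

Since it's all local files, there's no rate limit and no lag between "I ran a session" and "I can see what it cost".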