top | item 47158332

guld | 4 days ago

Interesting. Can anyone share personal insights or benchmarks on how effective TOON is compared to, e.g., JSON or Markdown (with Codex, Claude, ...)?
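For intuition on where the savings would come from, here is a rough sketch (not an official TOON implementation, and character count is only a crude proxy for token count) that serializes the same uniform records as JSON and in a TOON-style tabular layout, where the keys are declared once instead of repeated per record:

```python
import json

# Sample uniform records, serialized two ways.
records = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
    {"id": 3, "name": "Carol", "role": "user"},
]

as_json = json.dumps(records, indent=2)

# TOON-style layout (sketch): declare the field names once in a
# header, then emit one comma-separated row per record.
fields = list(records[0])
header = f"records[{len(records)}]{{{','.join(fields)}}}:"
rows = ["  " + ",".join(str(r[f]) for f in fields) for r in records]
as_toon = "\n".join([header] + rows)

print(as_json)
print(as_toon)
print(len(as_json), "chars as JSON vs", len(as_toon), "chars TOON-style")
```

The gap grows with the number of records, since JSON repeats every key and its quoting per record while the tabular form pays for the keys once.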

verdverm | 4 days ago

Ideas like this are bad ones. Words matter, and you should put effort into them; minimization is not the primary optimization. Don't let something like this MitM your hard work and change it for the worse.

The reason people go custom is to craft very good instructions and tools, something a machine is not capable of.

bojo | 4 days ago

Perhaps? I just used it to analyze one of my 96k Zig codebases with Claude Code, and here is (part of) what came back. (I snipped out the deeper analysis above, as it exposes my private project, but it was all correct.)

  Head-to-Head

  ┌──────────────┬─────────┬─────────────┬────────────┐
  │    Metric    │  Opty   │ Traditional │   Ratio    │
  ├──────────────┼─────────┼─────────────┼────────────┤
  │ Input tokens │ ~13,500 │ ~39,408     │ 2.9x fewer │
  ├──────────────┼─────────┼─────────────┼────────────┤
  │ Tool calls   │ 21      │ 61          │ 2.9x fewer │
  ├──────────────┼─────────┼─────────────┼────────────┤
  │ Round trips  │ 5       │ 9           │ 1.8x fewer │
  └──────────────┴─────────┴─────────────┴────────────┘

I had it run separate analyses, traditional vs. opty, and count the actual tool calls and input tokens. My prompt was basically, "do a full analysis of this entire codebase."
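The ratios in the table are consistent with the raw counts reported; a quick sanity check (numbers taken directly from the table above):

```python
# Verify the "x fewer" ratios in the head-to-head table, rounded
# to one decimal place as in the table itself.
input_tokens = (13_500, 39_408)  # Opty vs. traditional
tool_calls = (21, 61)
round_trips = (5, 9)

for label, (opty, traditional) in [
    ("Input tokens", input_tokens),
    ("Tool calls", tool_calls),
    ("Round trips", round_trips),
]:
    print(f"{label}: {traditional / opty:.1f}x fewer")
```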