top | item 47162328

matheus-rr | 4 days ago

The context window cost is the real story here. Every MCP tool description gets sent on every request regardless of whether the model needs it. If you have 20 tools loaded, that's potentially thousands of tokens of tool descriptions burned before the model even starts thinking about your actual task.

CLI tools sidestep this completely because the agent only needs to know the tool exists and what flags it takes. The actual output is piped and processed, not dumped wholesale into context. And you get composability for free - pipe to jq, grep, head, whatever.
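As a toy illustration of that filtering (the log lines here are made up), the agent only pays context cost for what survives the pipeline, not for the tool's full output:

```shell
# Hypothetical tool output: instead of dumping everything into context,
# the agent pipes it through standard filters and keeps only what it needs.
printf 'INFO start\nERROR disk full\nINFO done\nERROR timeout\n' \
  | grep '^ERROR' \
  | head -n 1
# prints: ERROR disk full
```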

The auth story is where MCP still wins though. If you need a user to connect their Slack or GitHub through a web UI, you need that OAuth dance somewhere. CLI tools assume you already have credentials configured locally, which is fine for developer tooling but doesn't work for consumer-facing AI products.

For developer workflows specifically, I think the sweet spot is what some people are calling SKILL files - a markdown doc that tells the agent what CLI tools are available and when to use them. Tiny context footprint, full composability, and the agent can read the skill doc once and cache it.
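For illustration, a minimal SKILL file might look something like this. There is no fixed standard; the layout and field names here are just one plausible sketch, though the `gh` commands shown are real:

```markdown
---
name: github-triage
description: Triage GitHub issues from the command line
---

# GitHub triage

Use the `gh` CLI for all GitHub operations. Relevant commands:

- `gh issue list --limit 20 --json number,title` - list open issues
- `gh issue view <number>` - read a single issue

Pipe JSON output through `jq` and `head` to keep only what you need;
never dump full API responses into context.
```

The whole doc is a few hundred tokens, read once, versus per-request tool schemas.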

jspdown | 4 days ago

For my personal coding agent, I've introduced a setup phase inside skills.

I distribute my skills with flake.nix and a lock file. The flake installs the required dependencies and sets them up. A frontmatter field defines the names of the secrets that need to be passed to the flake.
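As a sketch of what that frontmatter could look like (the field names and layout here are my guess, not jspdown's actual format):

```markdown
---
name: deploy
flake: ./skills/deploy/flake.nix
secrets:
  - DEPLOY_TOKEN
  - REGISTRY_PASSWORD
---
```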

As it is, this works for me because I trust my skill flakes and skills are static in my system:
- I build a Docker image for the agent, into which I inject the skills directory.
- Each skill is set up while building the image.
- Secrets are copied in before the setup phase and removed right after.

All in all, Nix is quite nice for Skills :)