item 46992810


dehugger | 17 days ago

I built something similar using an MCP tool that allows Claude to "outsource" development to GLM 4.7 on Cerebras (or a different model, but GLM is what I use). The tool allows Claude to set the system prompt and instructions, specify the output file to write to, and, crucially, list which additional files (or subsections of files) should be included as context for the prompt.

I've had great success with it, and it rapidly speeds up development time at fairly minimal cost.
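As a rough sketch of what such a delegation tool's interface might look like (the tool name, field names, and schema here are my assumptions, not the commenter's actual code), the key idea is that the agent fills in a prompt, an output target, and a list of file references, while the tool code itself performs all file reads:

```python
# Hypothetical MCP tool definition for outsourcing a coding task to a
# cheaper/faster model. All names here are illustrative assumptions.
OUTSOURCE_TOOL = {
    "name": "outsource_task",
    "description": "Delegate a coding task to a secondary model.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "system_prompt": {"type": "string"},
            "instructions": {"type": "string"},
            "output_file": {"type": "string"},
            # Files (or line ranges within files) to inline as context.
            "context_files": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "path": {"type": "string"},
                        "start_line": {"type": "integer"},
                        "end_line": {"type": "integer"},
                    },
                    "required": ["path"],
                },
            },
        },
        "required": ["system_prompt", "instructions", "output_file"],
    },
}


def build_prompt(args: dict, read_file) -> str:
    """Assemble the prompt sent to the outsourced model.

    `read_file(path, start, end)` is injected by the server, so the tool
    code -- not the agent -- decides what file access is actually allowed.
    """
    parts = [args["system_prompt"], args["instructions"]]
    for ref in args.get("context_files", []):
        body = read_file(ref["path"], ref.get("start_line"), ref.get("end_line"))
        parts.append(f"--- {ref['path']} ---\n{body}")
    return "\n\n".join(parts)
```

The point of routing every read through `read_file` is that the server can apply whatever access policy it likes without the agent ever holding filesystem permissions itself.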


cheema33 | 17 days ago

Why use MCP instead of an agent skill for something like this when MCP is typically context inefficient?

dehugger | 6 days ago

Late reply, but the answer is: 1) there is a fair amount of behind-the-scenes work going on that I don't want the agent to have access to or know about. Tools make it very easy to keep strong control over what can and cannot be done. File system access is built directly into the tool, which makes it much easier to be confident about what it has access to, since the thing that actually holds the permissions is the tool's code, not the agent. 2) Portability: I can host it in a single spot and serve it to multiple models on different machines easily, which is very desirable for me. 3) I can update the tool's configuration independently of a skill.
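That filesystem point can be made concrete. As an illustrative sketch (not the commenter's implementation, and the root path is an assumed config value), the tool can resolve every requested path against an allowlisted root before touching disk, so the agent never gets raw filesystem access:

```python
# Illustrative: the tool code, not the agent, enforces which paths may be
# read. ALLOWED_ROOT is a hypothetical configuration value.
from pathlib import Path

ALLOWED_ROOT = Path("/srv/projects").resolve()


def resolve_allowed(requested: str) -> Path:
    """Resolve a requested path and reject anything (e.g. `../` escapes)
    that falls outside ALLOWED_ROOT."""
    candidate = (ALLOWED_ROOT / requested).resolve()
    if candidate != ALLOWED_ROOT and ALLOWED_ROOT not in candidate.parents:
        raise PermissionError(f"{requested!r} is outside the allowed root")
    return candidate
```

Because the check lives in the server, swapping in a different agent or model changes nothing about what can actually be read.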

A skill wouldn't be a bad option though, and I highly recommend creating one yourself! The ability to customize our workflows and tools to a high degree is one of the largest strengths of agentic coding.

pertymcpert | 17 days ago

MCP is fine if your tool definition is small. If it's something used very often, like a sub-agent harness, then it's probably the more context-efficient option: the tool is already loaded in context, so the model doesn't have to spend a few turns deciding to load the skill, thinking about it, and then invoking another tool/script to invoke the subagent.

wahnfrieden | 17 days ago

Models haven't been trained enough on using skills yet, so they typically ignore them