pajtai | 13 days ago
Also, I bet the quality of these docs varies widely across both human- and AI-generated ones. Good Agents.md files should have progressive disclosure so only the items required by the task are pulled in (e.g. for DB schema related topics, see such and such a file).
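To illustrate the progressive-disclosure idea, an AGENTS.md might keep only general rules inline and point the agent at topic files it should read on demand (file names here are hypothetical):

```
# AGENTS.md

## General
- Run the test suite before committing.

## Topic-specific docs (read only when the task needs them)
- DB schema changes: read `docs/db-schema.md` first
- Deployment: read `docs/deploy.md` first
```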
Then there's the choice of pulling things into Agents.md vs skills which the article doesn't explore.
I do feel for the authors, since the article already feels old. The models and tooling around them are changing very quickly.
deaux | 12 days ago
> (e.g. for DB schema related topics, see such and such a file).
Rather than doing this, put another AGENTS.md file in a DB-related subfolder. It will be automatically pulled into context when the agent reads any files in that folder. This is supported out of the box by any agent worth its salt, including OpenCode and CC.
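A sketch of that layout (folder and file names are made up for illustration):

```
repo/
├── AGENTS.md          # repo-wide conventions, always in context
└── db/
    ├── AGENTS.md      # pulled in only when the agent touches files under db/
    └── schema.sql
```

The nearest AGENTS.md on the path to the file being edited gets loaded alongside the root one, so the DB instructions never cost tokens on unrelated tasks.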
IMO static instructions referring an LLM to other files are an anti-pattern, at least with current models. This is a flaw of the skills spec, which refers to creating a "references" folder and such. I think initial skills demos from Anthropic also showed this. This doesn't work.
gordonhart | 12 days ago
I thought Claude Code didn't support AGENTS.md? At least according to this open issue[0], it's still unsupported and has to be symlinked to CLAUDE.md to be automatically picked up.
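The workaround the issue describes is a one-time symlink from the repo root (assuming AGENTS.md already exists there):

```shell
# Claude Code looks for CLAUDE.md; point that name at the existing AGENTS.md
# so both tools read the same instructions.
ln -s AGENTS.md CLAUDE.md
```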
[0] https://github.com/anthropics/claude-code/issues/6235
deaux | 12 days ago
Progressive disclosure is invaluable because it reduces context rot. Every single token in context influences future ones and degrades quality.
I'm also not sure how it reduces the benefit of token caching. They're still going to be cached, just later on.
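One way to see why: prefix caching matches a request against previously cached token prefixes, so instructions pulled in later extend the shared prefix rather than invalidating it. A minimal sketch of the lookup (not any provider's actual implementation; token sequences are stand-ins):

```python
def longest_cached_prefix(tokens, cache):
    """Length of the longest cached sequence that is a prefix of `tokens`."""
    return max((len(c) for c in cache if tokens[:len(c)] == c), default=0)

# The base system prompt + root AGENTS.md is already cached.
cache = [("sys", "agents_md")]

# A DB task pulls in the subfolder AGENTS.md *after* the stable prefix,
# so the cached portion is still fully reused.
request = ("sys", "agents_md", "db_agents_md", "user_msg")
print(longest_cached_prefix(request, cache))  # prints 2: the shared prefix hits
```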