evalstate | 4 days ago
For the former, I'd be interested in learning more about that. From a harness perspective, the difference would be the inclusion of the description in the system prompt, plus an additional tool call to return the skill. While that's certainly less efficient than adding the context directly, I'd be surprised if it degraded task performance significantly.
I tend to be quite focused with my Skill/Tool usage in general, though, inviting them into context when needed rather than increasing the potential for model confusion.
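(For illustration, a minimal sketch of the harness flow described above — short skill descriptions embedded in the system prompt, with the full skill body only pulled into context via a tool call. All names and skill contents here are hypothetical.)

```python
# Hypothetical skill registry: only the description goes in the
# system prompt; the body enters context on demand via a tool call.
SKILLS = {
    "pdf-extraction": {
        "description": "Extract text and tables from PDF files.",
        "body": "Step-by-step instructions for PDF extraction...",
    },
}

def build_system_prompt() -> str:
    """Embed only the short skill descriptions, not the full text."""
    lines = ["You have these skills. Call load_skill(name) to read one:"]
    for name, skill in SKILLS.items():
        lines.append(f"- {name}: {skill['description']}")
    return "\n".join(lines)

def load_skill(name: str) -> str:
    """The tool the model calls; returns the full skill body,
    which the harness then appends to the conversation context."""
    skill = SKILLS.get(name)
    return skill["body"] if skill else f"Unknown skill: {name}"
```

The overhead relative to inlining docs is one extra round trip per skill actually used, which is the efficiency trade-off mentioned above.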
neya | 4 days ago
Sorry, I misquoted the company, it was Vercel, not Cursor.
"A compressed 8KB docs index embedded directly in AGENTS.md achieved a 100% pass rate, while skills maxed out at 79% even with explicit instructions telling the agent to use them. Without those instructions, skills performed no better than having no documentation at all."
https://vercel.com/blog/agents-md-outperforms-skills-in-our-...
evalstate | 4 days ago