top | item 47142505


Frannky | 5 days ago

I think unless you're doing simple tasks, skills are unreliable. For better reliability, I have the agent trigger APIs that handle the complex logic (and their own LLM calls) internally. Has anyone found a solid strategy for making complex 'skills' more dependable?
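A minimal sketch of the pattern being described: the agent only picks a tool name and arguments, and all the complicated branching lives in ordinary code behind that tool (which could itself call an API making its own LLM calls). The names here (`enrich_record`, `TOOLS`, `dispatch`) are made up for illustration.

```python
def enrich_record(record_id: str) -> dict:
    # Stand-in for the real work: in practice this would call your API,
    # which owns the complex logic and any internal LLM calls.
    return {"record_id": record_id, "status": "enriched"}

# Registry of tools the agent is allowed to trigger.
TOOLS = {"enrich_record": enrich_record}

def dispatch(tool_call: dict) -> dict:
    """Route the agent's tool call; an invalid name fails loudly
    instead of silently drifting like an ignored text instruction."""
    name = tool_call["name"]
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**tool_call["arguments"])

result = dispatch({"name": "enrich_record", "arguments": {"record_id": "r1"}})
# → {'record_id': 'r1', 'status': 'enriched'}
```

The point of the split is that the agent can only get the *selection* wrong, not the procedure itself.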


selridge | 5 days ago

In my experience, all text “instruction” to the agent should be taken on a prayer. If you write compact agent guidance that is not contradictory and is local and useful to your project, the agent will follow it most of the time. There is nothing that you can write that will force the agent to follow it all of the time.

If one can accept failure to follow instructions, then the world is open. That condition does not really comport with how we think about machines. Nevertheless, it is the case.

Right now, a productive split is to place things that you need to happen into tooling and harnessing, and place things that would be nice for the agent to conceptualize into skills.
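Concretely, the "must happen" side can live in a hook rather than in prose the agent may ignore. A sketch of what a Claude Code-style PreToolUse hook configuration can look like (field names from memory of the hooks docs; the `check_command.sh` script is hypothetical):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./check_command.sh" }
        ]
      }
    ]
  }
}
```

A hook like this runs deterministically before every matching tool call, so the guarantee doesn't depend on the model choosing to comply.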

Frannky | 5 days ago

Yeah, that's my experience too

plufz | 5 days ago

My only strategy is what used to be called slash commands but are also skills now, i.e. I call them explicitly. I think that actually works quite well, and you can allow specific tools and tell it to use specific hooks for security or validation in the frontmatter properties.
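For anyone who hasn't used these: roughly what an explicitly-invoked command with restricted tools can look like in Claude Code-style frontmatter (field names as I remember them from the docs; verify before relying on them):

```markdown
---
description: Summarize the staged diff and draft a commit message
allowed-tools: Bash(git diff:*), Bash(git log:*)
---

Summarize the staged changes and propose a conventional commit message.
Do not modify any files.
```

Because the user triggers it by name, there's no reliance on the model deciding on its own that the skill applies.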

chickensong | 5 days ago

Is it that the skills aren't being triggered reliably, or that they get triggered but the skill itself is complex and doesn't work as expected?

Frannky | 5 days ago

both

Rebelgecko | 5 days ago

Having the skill be "call this script with these args" seems to reduce the amount of stuff that goes wrong
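A sketch of that shape, using the SKILL.md frontmatter convention (the skill name and script path here are hypothetical):

```markdown
---
name: report-builder
description: Build the weekly report. Use when the user asks for a status or weekly report.
---

Do not assemble the report yourself. Run:

    python scripts/build_report.py --week <YYYY-WW>

and return the file path the script prints. If it exits non-zero, report the error verbatim.
```

The skill body shrinks to "invoke this script," so the deterministic part is code and the agent's job is just recognizing when to run it.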

triage8004 | 4 days ago

I found interrupting and insisting on the skill use the easiest way... there have got to be better ways than this