Every agent framework has the same bug – prompt decay. Here's a fix

1 point | nikolasi | 5 days ago | gist.github.com

5 comments

nikolasi|5 days ago

I run 11 agents in 100K+ token sessions. Same problem everyone has: agents follow rules perfectly at first, then drift until the system prompt might as well not exist.

The SCAN fix: put recall questions at the end of each system-prompt section. Before each task, the agent answers them; that's roughly 300 tokens that actively link the instructions to the current task. The agent isn't passively re-reading the prompt, it's generating connections to it.
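A minimal sketch of the idea as I understand it from the description above. Everything here (the `Section` class, `render_system_prompt`, `build_recall_step`) is illustrative naming, not from the gist; the point is just the shape: questions live at the end of each prompt section, and a short pre-task step forces the agent to answer them for the task at hand.

```python
# Hypothetical sketch of the SCAN method described above.
# Names and structure are assumptions, not the author's actual code.
from dataclasses import dataclass

@dataclass
class Section:
    title: str
    rules: str
    questions: list  # recall questions appended to this section

def render_system_prompt(sections):
    """Build a system prompt where each section ends with its questions."""
    parts = []
    for s in sections:
        qs = "\n".join(f"- {q}" for q in s.questions)
        parts.append(f"## {s.title}\n{s.rules}\n\nRecall questions:\n{qs}")
    return "\n\n".join(parts)

def build_recall_step(sections, task):
    """Short pre-task message that makes the agent answer every section's
    questions about THIS task before acting, instead of passively
    re-reading the prompt."""
    qs = [q for s in sections for q in s.questions]
    numbered = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(qs))
    return (
        "Before starting the task below, answer each question briefly "
        f"and concretely for this specific task:\n{numbered}\n\nTask: {task}"
    )

sections = [
    Section("Code style", "Never commit without running the tests.",
            ["Which code-style rule applies to the current task?"]),
    Section("Safety", "Ask before deleting or overwriting files.",
            ["Does this task touch anything destructive?"]),
]

print(render_system_prompt(sections))
print(build_recall_step(sections, "refactor the auth module"))
```

The recall step is what gets sent before each task; its answer is what costs the ~300 tokens mentioned above.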

Months of daily use across Claude and Kimi. No benchmarks — can't measure attention weights from outside. But the difference is obvious: without SCAN agents lose rules by mid-session, with SCAN they don't.

No dependencies, any model, open method. Full writeup in the gist.

soletta|5 days ago

I was a bit dubious until I read the gist. I've used a similar technique before to 'tame' GPT-3.5 and keep it following instructions and it worked well (though I had to ask the model to essentially repeat instructions after every 10 or so turns). I'm surprised you see that much drift though; older models were pretty bad with long contexts but I feel like the problem has mostly gone away with Claude Opus 4.6.