
nikolasi | 4 days ago

Glad it resonated! Yeah, repeating instructions every N turns was the old approach — SCAN basically does the same thing but with ~20 tokens instead of the full prompt each time.
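For anyone curious what that looks like mechanically, here's a minimal sketch of the periodic-reminder idea. This is not SCAN's actual implementation or API — the names (SCAN_REMINDER, inject_reminders) and the every-N-user-turns trigger are my illustrative assumptions:

```python
# Illustrative sketch only: instead of re-sending the full system prompt
# every N turns, insert a short (~20 token) reminder message.
# SCAN_REMINDER and inject_reminders are hypothetical names, not SCAN's API.

FULL_PROMPT = "You are an assistant. " + "Rule text... " * 50  # large prompt
SCAN_REMINDER = "[scan] re-check rules: tone, citations, scope"  # ~20 tokens

def inject_reminders(messages, every_n=10, reminder=SCAN_REMINDER):
    """Return a copy of the conversation with a compact reminder
    inserted after every `every_n` user turns."""
    out = []
    user_turns = 0
    for msg in messages:
        out.append(msg)
        if msg["role"] == "user":
            user_turns += 1
            if user_turns % every_n == 0:
                out.append({"role": "system", "content": reminder})
    return out
```

The point of the sketch is the cost difference: the reminder is a constant ~20 tokens per injection, versus re-paying the full prompt (thousands of tokens, in my case ~4000) each time.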

On drift being "mostly gone" — depends on prompt complexity. With a simple system prompt, sure, modern models hold up fine. But with a large instruction set (mine is ~4000 tokens, 25+ rules across 7 sections) the drift is very much still there, even on Opus. The more rules you have, the more they compete for attention, and the easier it is for specific ones to drop off mid-session.

Also worth noting — this isn't limited to coding agents. Any long-running LLM workflow with complex instructions has the same problem. Customer support bots that forget their tone policy, medical assistants that stop citing sources, content moderation that gets lenient over time. If you have a system prompt with rules and a session longer than 20 minutes — the rules will decay.


soletta | 4 days ago

Interesting. I've been coping by being very conservative about how many rules I introduce into the context, but if what you're saying is true, then something like SCAN actually helps the models break past the "total rule count" barrier by giving them something like "cognitive scaffolding". I'm eager to try this out. Thanks again for sharing!

nikolasi | 4 days ago

That's a great way to put it — "cognitive scaffolding" is exactly what it is. And yeah, keeping rules minimal is smart, but at some point the project just needs 25 rules and you can't cut them down without losing something important. SCAN lets you have a large instruction set without paying the full attention cost. Let me know how it goes!