iggori | 3 days ago
LLMs don’t execute instructions. They predict tokens. A prompt full of rules isn’t a rulebook to the model; it’s a set of competing statistical pulls. When those pulls conflict, the model doesn’t resolve them by hierarchy. It averages them. That’s the root of drift.
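The "averaging" claim can be illustrated with a toy sketch (hypothetical numbers, not a real model): treat two conflicting rules as opposing pulls on the next-token logits. Summing the pulls and applying softmax blends them instead of letting one win by hierarchy.

```python
import math

def softmax(logits):
    """Convert logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy two-token vocabulary.
# "Rule A" pulls toward token 0; "Rule B" pulls toward token 1.
rule_a = [2.0, -2.0]
rule_b = [-2.0, 2.0]

# A rulebook would pick one winner by precedence.
# A probability field just sums the pulls before sampling:
combined = [a + b for a, b in zip(rule_a, rule_b)]
print(softmax(combined))  # → [0.5, 0.5]: neither rule wins, they blend
```

Equal and opposite pulls land at 50/50, which is exactly the kind of coin-flip behavior that reads as "drift" over a long horizon.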
The essay looks at a few examples: OpenAI moving away from instruction‑governed plugins toward structured actions, Devin’s long‑horizon failures that look like “bugs” but are really drift, and Sierra’s constraint‑based architecture that avoids this by making certain behaviors structurally impossible.
The core idea is that instructions compete, but constraints accumulate. If you treat an LLM like software, you get drift. If you treat it like a probability field, you design the boundary layer differently.
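A minimal sketch of what "structurally impossible" can mean at the boundary layer, with invented names and limits (nothing here is Sierra's actual implementation): the executor validates every proposed action against a declared schema and hard limits in code, so a drifting model cannot talk its way past them.

```python
# Hypothetical action schemas and limit; the model only proposes,
# the boundary layer decides.
ALLOWED_ACTIONS = {
    "lookup_order": {"order_id": str},
    "issue_refund": {"order_id": str, "amount_cents": int},
}
MAX_REFUND_CENTS = 10_000  # the constraint lives in code, not in the prompt

def validate_action(name, args):
    """Return True only if the proposed action is structurally permitted."""
    schema = ALLOWED_ACTIONS.get(name)
    if schema is None:
        return False  # unknown action: rejected, no instruction needed
    if set(args) != set(schema):
        return False  # wrong argument shape
    if not all(isinstance(args[k], t) for k, t in schema.items()):
        return False  # wrong argument types
    if name == "issue_refund" and args["amount_cents"] > MAX_REFUND_CENTS:
        return False  # over-limit refunds are impossible, not discouraged
    return True

print(validate_action("issue_refund", {"order_id": "A1", "amount_cents": 5_000}))   # True
print(validate_action("issue_refund", {"order_id": "A1", "amount_cents": 50_000}))  # False
print(validate_action("delete_database", {}))                                       # False
```

Unlike a prompt rule, this check cannot be averaged against a competing pull; constraints like it compose, which is the sense in which they accumulate rather than compete.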
https://productics.substack.com/p/instructions-are-suggestio...