(no title)
stitched2gethr | 13 days ago
I have actually found something close to the opposite. I work on a large codebase and, for complex tasks, I often use the LLM to generate artifacts before performing the task. I use a prompt along the lines of "go explore this area of the code and write about it". It documents concepts and includes pointers to specific code. Then a fresh session can use that artifact without reading the stuff that doesn't matter. It uses more tokens overall, but captures important details that can get totally missed when you just let it go.
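A minimal sketch of the two-phase workflow described above. The `ask_llm` helper is a placeholder for whatever model API you actually call, and the prompts and file names are illustrative assumptions, not the commenter's exact setup:

```python
from pathlib import Path

def ask_llm(prompt: str) -> str:
    """Placeholder: swap in a real model API call here."""
    return f"[model output for prompt of {len(prompt)} chars]"

def write_exploration_artifact(area: str, out_path: str) -> str:
    # Phase 1: ask the model to explore one area of the codebase and
    # document the concepts, with pointers to specific code.
    prompt = (
        f"Go explore the {area} area of this codebase and write about it: "
        "document the key concepts and include pointers to specific code."
    )
    artifact = ask_llm(prompt)
    Path(out_path).write_text(artifact)
    return artifact

def perform_task_with_artifact(task: str, artifact_path: str) -> str:
    # Phase 2: a fresh session reads only the artifact, not the whole
    # codebase, so important details survive while irrelevant context
    # stays out of the window.
    notes = Path(artifact_path).read_text()
    return ask_llm(f"Context notes:\n{notes}\n\nTask: {task}")
```

The point of splitting it into two calls is that the second session starts clean: it inherits the distilled notes rather than the full exploration transcript.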
embedding-shape | 13 days ago
No, I see the same effect even when I restart the context from scratch, which I do for each change.