I can control it well for an hour or two using .md files and breaking everything down into small tasks, but then, out of nowhere, it burns everything down, racks up 10x technical debt, and replaces everything with placeholders.
You are working against LLM attention. An LLM looks at a conversation and focuses on its attention points, usually the start and the end. Your previous work falls into the out-of-attention space and gets nuked.
If you're asking how to keep everything in attention: we currently can't.
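A toy sketch of why earlier work "gets nuked" (this is not any real API, just an illustration): once a transcript exceeds the model's context budget, it is typically the oldest messages that get dropped, so early decisions silently disappear.

```python
# Toy illustration only: trim a conversation to a fixed token budget,
# keeping the newest messages. Anything older than the budget allows
# is simply never shown to the model again.

def trim_to_window(messages, budget, count_tokens=lambda m: len(m.split())):
    """Keep the most recent messages that fit within `budget` tokens."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > budget:
            break                   # everything older gets dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    "spec: auth module uses bcrypt, 12 rounds",  # early design decision
    "ok, implemented login()",
    "now add password reset",
    "done, see reset.py",
    "refactor everything to async",
]
window = trim_to_window(history, budget=12)
# The early spec message no longer fits, so the model never "sees" it again.
```

Real providers use subtler strategies (summarization, sliding windows), but the effect the parent describes is the same: the start of your work falls out of what the model attends to.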
My code (the code ChatGPT writes for me) runs from 500 to 1,000 lines. Every 5-7 versions, it starts messing things up.
I keep the working versions in a Word file, landscape A3, with 3 columns: version number, comment/changelog, the code (yes: cheap, scalable, easy).
So, every 5-7 versions, I start a new chat. I ask ChatGPT to read the code and write a summary/description of it, and then I proceed to ask it for new changes/enhancements.
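The bookkeeping in this routine can be sketched in a few lines. Everything here is invented for illustration (`VersionEntry`, `needs_new_chat`, `handoff_prompt` are not part of any ChatGPT tooling); it just mirrors the 3-column log and the "restart every 5-7 versions" rule.

```python
# Hedged sketch of the version-log-plus-restart workflow described above.
# All names are hypothetical; the point is only the bookkeeping.
from dataclasses import dataclass

@dataclass
class VersionEntry:
    """One row of the 3-column log: version, changelog, code."""
    version: int
    changelog: str
    code: str

def needs_new_chat(log, every=6):
    """True once roughly 5-7 versions have accumulated since the last reset."""
    return len(log) > 0 and len(log) % every == 0

def handoff_prompt(log):
    """Build the opening message for a fresh chat from the latest version."""
    latest = log[-1]
    return (
        f"Here is version {latest.version} of my program "
        f"({latest.changelog}). Read it, write a short summary/description, "
        "then wait for my next change request.\n\n" + latest.code
    )

log = [VersionEntry(i, f"change {i}", f"# code v{i}") for i in range(1, 7)]
if needs_new_chat(log):
    prompt = handoff_prompt(log)
```

The Word file is doing the real work here: because the known-good code lives outside the chat, a fresh conversation can always be re-seeded from it.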
I'm using an editor called Zed, and it has an option to create a "new thread from summary". It also shows at the top of the screen how many tokens I have used out of the total available, so combining the two, I think it is best to periodically create a new "chat" from a summary.
I use .md files to keep Cursor on track. The flow I use is something like:
Define a feature in detail (using transcription) -> get o3 or Gemini 2.5 Pro to break it down into very small testable tasks -> review this -> paste into a tasks.md file -> write an architecture.md file (or similar) for any additional context needed -> then prompt Cursor to work through tasks.md step by step.
This keeps it on track, with the whole feature defined from the outset.
But eventually... it will try to ignore the Dockerfile and set up locally, create multiple .env files, write code with placeholders, ignore files it's just created and written...
It's impossible to get it back on track - it gets into a debug loop of making things worse rather than getting back on track.
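A minimal shape for the two files in this flow might look like the following (the feature, task names, and rules are made up for illustration, not taken from any real project):

```md
<!-- tasks.md -->
# Feature: CSV export (example name)
- [ ] 1. Add `ExportService` with an empty `to_csv()` returning ""
- [ ] 2. Unit test: `to_csv()` on one record yields a header plus one row
- [ ] 3. Wire a `/export` endpoint to `ExportService`; test it returns 200
- [ ] 4. Handle the empty dataset: header row only, covered by a test

<!-- architecture.md -->
# Context for the agent
- Run everything inside the existing Dockerfile; never set up locally.
- There is exactly one `.env` file, at the repo root; do not create others.
- No placeholder implementations: every task ends with its test passing.
```

Having the agent check tasks off in the file itself gives it a durable record of progress that survives the context-loss problems described elsewhere in the thread.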
campervans|9 months ago
So you're saying I need some adderall.ai