chaboud | 7 days ago
They've actually hit upon something that several of us have evolved to naturally.
LLMs are like unreliable interns with boundless energy. They make silly mistakes, wander into annoying structural traps, and have to be unwound if left to their own devices. It's like the genie that almost pathologically misinterprets your wishes.
So, how do you solve that? Exactly how an experienced lead or software manager does: you have the system write things down before executing, explain them back to you, and ground all of its thinking in the code and documentation, instead of making assumptions about code after a superficial review.
In the early ChatGPT days, this meant function-level thinking and clearly described jobs. With Cline, it meant rules files that forced writing architecture.md files and vibe-code.log histories, demanding grounding in research and code reading.
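A rules file of that sort might look something like this. The file names architecture.md and vibe-code.log come from the comment above; the specific directives are an illustrative sketch, not the author's actual file:

```
# .clinerules (illustrative sketch)
- Before writing any code, read the relevant source files and summarize
  what they actually do.
- Record the intended design in architecture.md and wait for approval
  before implementing.
- Append every task, decision, and outcome to vibe-code.log.
- Never assume a function's behavior from its name; open the file and
  verify against the real code.
```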
Maybe nine months ago, another engineer said two things to me, less than a day apart:
- "I don't understand why your clinerules file is so large. You have the LLM jumping through so many hoops and doing so much extra work. It's crazy."
- The next morning: "It's basically like a lottery. I can't get the LLM to generate what I want reliably. I just have to settle for whatever it comes up with and then try again."
These systems have to deal with minimal context, ambiguous guidance, and extreme isolation. Operate with a little empathy for the energetic interns, and they'll uncork levels of output worth fighting for. We're Software Managers now. For some of us, that's working out great.
vishnugupta|7 days ago
For those starting out with Claude Code, it gives a structured way to get things done, bypassing the time and energy needed to “hit upon something that several of us have evolved to naturally”.
chaboud|7 days ago
Anyone who spends some time with these tools (and doesn't black out from smashing their head against their desk) is going to find substantial benefit in planning with clarity.
It was #6 in Boris's run-down: https://news.ycombinator.com/item?id=46470017
So, yes, I'm glad that people write things out and share. But I'd prefer that they not lead with "hey folks, I have news: we should *slice* our bread!"
fintechie|7 days ago
Personally, I have been using a similar flow for almost 3 years now, tailored to my needs. Everybody who uses AI for coding eventually gravitates toward a similar pattern, because it works quite well across IDEs, CLIs, and TUIs.
bambax|7 days ago
The LLM does most of the coding, yet I wouldn't call it "vibe coding" at all.
"Tele coding" would be more appropriate.
mlaretallack|7 days ago
Requirements, design, task list, coding.
bonoboTP|7 days ago
For me, what works well is to ask it to write some code upfront to verify its assumptions against actual reality, not just telling it to review the sources "in detail". It gains much more from real output of running code, which clears up wrong assumptions. Do some smaller jobs, write up md files, then plan the big thing, then execute.
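Such an upfront verification script might look like the sketch below. The specific assumptions checked are illustrative stand-ins; in practice the agent would probe your own codebase's behavior:

```python
# probe.py -- a throwaway script an agent could run before planning,
# checking its assumptions against real output instead of guessing.
import json

# Assumption: json.dumps sorts keys by default. (It does not --
# insertion order is preserved unless sort_keys=True is passed.)
out = json.dumps({"b": 1, "a": 2})
print("dumps order:", out)
assert out == '{"b": 1, "a": 2}', "keys are NOT sorted by default"

# Assumption: round() always rounds halves up. (Python 3 actually
# rounds halves to the nearest even number.)
print("round(2.5) =", round(2.5))
assert round(2.5) == 2, "Python 3 uses banker's rounding"

print("All assumptions verified against actual behavior.")
```

Real printed output like this grounds the plan in facts rather than in the model's (often wrong) priors.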
unknown|7 days ago
[deleted]
locknitpicker|7 days ago
> They've actually hit upon something that several of us have evolved to naturally.
I agree, it looks like the author is talking about spec-driven development with extra time-consuming steps.
Copilot's plan mode also supports iterating on a draft plan out of the box, executing only after you manually review and edit it. I don't know what the blogger was proposing that ventured outside of plan mode's happy path.
user3939382|7 days ago
My architecture is so beautifully strong that even LLMs and human juniors can’t box their way out of it.
LeafItAlone|7 days ago
So you probably wouldn’t have any clout anyways, like all of the other blog posts.
xnx|7 days ago
This was a popular analogy years ago, but it is out of date in 2026.
Specs and a plan are still a good basis; they are of equal or greater importance than the ephemeral code implementation.
unknown|7 days ago
[deleted]
qudat|7 days ago
This isn’t directed specifically at you but at the general community of SWEs: we need to stop anthropomorphizing a tool. Code agents are not human-capable, and scaling pattern matching will never hit that goal. That’s all hype, and this is coming from someone who runs the range of daily CC usage. I’m using CC to its fullest capability while also being a good shepherd for my prod codebases.
Pretending code agents are human-capable is fueling this Kool-Aid-drinking hype craze.
MrDarcy|7 days ago
Pretending otherwise is counterproductive. That ship has already sailed; it is fairly clear that the best way to make use of them is to phrase input messages as if they were addressed to a person in that role.
blackarrow36|7 days ago
[deleted]