Narciss | 22 days ago
Nothing wrong with that, except that unlike any other tool out there, agentic coding gets approached by smart senior engineers who would otherwise spend time reading documentation and understanding a new package/tool/framework before drawing conclusions about it, yet here they conclude with "I spun up Claude Code and it's not working". Dunno why the same level of diligence isn't applied to agentic coding as well.
The first question I always ask such engineers is "what model have you tried?" And it always turns out to be a non-SOTA model on tasks that are not simple. Have you tried Claude Opus?
Second question: have you tried plan mode?
And then I politely ask them to read some documentation on using these tools, because the simplicity of the chat interface is deceptive.
zazibar | 22 days ago
Narciss | 22 days ago
Just that many brilliant engineers like themselves test agentic tools without the same level of thorough understanding that they bring to other software engineering tools they try out.
jmull | 22 days ago
I always wonder what the purpose is of posting these generic, superficial defenses of a certain form of LLM-based coding.
Narciss | 22 days ago
My experience is different in that case, but it certainly depends on the type of technical challenge, the programming language, etc.
Candidates that perform better or worse exist with and without agentic coding tools. I've had positive and negative experiences on both fronts, so I'd attribute the OP's experience to the N=1 problem, and perhaps to the model's jagged intelligence.
I work mostly in TypeScript, and it's well known that models are particularly well versed in it. I know that other programming languages are less well supported because there is less training data for them, in which case models could be worse at them across the board (or some SOTA models could be better than others).
mjburgess | 22 days ago
Narciss | 22 days ago