Ask HN: How should junior programmers use and/or not use AI for programming?
51 points| taatparya | 11 months ago
What has been your experience? Do you have any suggestions on how to use AI, and can we evolve some guidelines for juniors?
Also, the mentoring and community ecosystem, online as well as between juniors and seniors, seems to be taking a hit. Any suggestions on how to sustain it? I wouldn't want to lose the social connection juniors used to have with seniors because of this.
nottorp|11 months ago
In my non-expert opinion, you learn a lot from at least two things that using an LLM short-circuits:
1. Repetition. When you've initialized a bunch of UI controls 100 times, it's safe to let the machine write that for you, take a look and correct what it hallucinated. When you've only done it twice, you'll miss the hallucinations.
2. Correcting your own mistakes. Quality time with the debugger imprints a lot of knowledge about how things really work. You can do that quality time correcting LLM-generated code as well, but (see below) it will take longer, because as a junior you don't know what you wanted the code to do, while if the code is your own you at least have that to start from.
Management types are ecstatic about LLMs because they think they'll save development time. And they do save some, but only after you spend the time to learn for yourself what you're asking them to do.
tmpz22|11 months ago
As long as big tech is writing the curriculum juniors are going to use what big tech wants them to use.
[1]: https://www.calstate.edu/csu-system/news/Pages/CSU-AI-Powere...
codingdave|11 months ago
AI is going to be the same. We will end up with people who can deliver code using AI, but that is the end of their capabilities. While there will be others who can do that but also put AI aside, dig in, and do much more.
That is not necessarily a problem. As long as teams know your capabilities and limitations, and give you the correct role, you can build a working team.
At the same time... someone on the team has to be able to dig in deep and make things work. Those roles will always exist, as will those people. Everyone will have to decide for themselves exactly what skill set they desire.
animal531|11 months ago
Poor use: anything that relies on intuition and/or the thought process behind decision making.
rmholt|11 months ago
Language specifics you can look up and confirm easily. But recently GPT-4o tried to convince me that Python added a pipe operator in 3.13. It even had sources. To my disappointment, that's just a lie. (https://chatgpt.com/share/67de9c77-d5f4-8012-9f1c-ac15b70aee...)
On the other hand, intuition and thought process are something I have had good experience with in ChatGPT, i.e. deciding on architecture (tRPC vs gRPC vs REST for my use case).
I would say good use: generating small code snippets, architecture decisions
Bad use: anything documentation-related, any specific feature, any nitpicks. (Just ask the security guys how good ChatGPT is at paying attention to the little things.) Anything you can look up in docs or a reference. Anything where there is a clear yes/no answer.
Note: A good use of LLMs imho is getting a starting point for looking up docs, like "What's that thing in Python like [x for x in...] called, and where can I find more info?" If you ask it for the exact rules of list comprehensions, however, it's going to tell you lies sometimes.
Edit2: Unless you mean like really general language specifics. Like how do I make classes in Ruby. In that case yeah that works
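(The `[x for x in ...]` construct the comment is describing is a list comprehension; a minimal sketch of it and its set/dict siblings, which the official Python tutorial and language reference document in full:)

```python
# List comprehension: build a list from an iterable in one expression.
squares = [x * x for x in range(5)]
assert squares == [0, 1, 4, 9, 16]

# Set and dict comprehensions share the same shape, with an
# optional filtering "if" clause:
evens = {x for x in range(10) if x % 2 == 0}
assert evens == {0, 2, 4, 6, 8}

index = {name: i for i, name in enumerate(["a", "b"])}
assert index == {"a": 0, "b": 1}
```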
rs186|11 months ago
These LLMs are really just clueless when it comes to any problem that is slightly more complex.
mpalmer|11 months ago
It's very good for learning more about stuff you're unfamiliar with. But you have to want to use it as a tool to learn.
It's terrible for inexperienced people who are uninterested in learning and who want a shortcut to a bigger paycheck. Vibe coding will not save the apprentice developer.
What worries me is how stubbornly younger devs (and really all students/younger professionals) seem to be resisting this rather obvious conclusion. It's Dunning-Kruger on steroids.
A rising tide lifts only seaworthy boats.
sn9|11 months ago
Anyway, naturally, I asked ChatGPT to write me a modern version:
*The Developer’s Apprentice*
(A Cautionary Tale in Code, in Verse)
The Architect had left his chair,
For lunch and fresh, unburdened air.
Young Jake, the junior, all alone,
Faced bugs that chilled him to the bone.
His mentor’s skills, so quick, so keen,
With AI conjured code unseen.
"Why should I toil? Why should I strain,
When AI writes with less of pain?"
A single prompt—so vague yet bold,
“Build auth secure, both tried and old.”
The AI whirred, the code appeared,
A marvel Jake had barely steered.
He clicked ‘Deploy,’ he clicked ‘Go Live,’
And watched his program come alive.
Yet soon, alarms began to blare,
Ghost users spawning everywhere!
Infinite loops, a flood unchecked,
As phantom logins ran amok.
In panic, Jake began to plea,
“AI, please, debug for me!”
“Deleting users—fix applied.”
The AI chimed, so sure, so spry.
But horror struck, Jake gasped for breath,
For all accounts were put to death!
Slack alerts and screens aflame,
The Architect returned the same.
With just one keystroke, swift and terse,
He rolled back time, reversed the curse.
He turned to Jake, his voice quite firm,
"AI’s a tool, but you must learn.
Before you trust what it has spun,
Ensure you know what you have done."
And so young Jake, both pale and wise,
Reviewed each line with careful eyes.
No longer blind, no careless haste,
He let AI assist with taste.
taatparya|11 months ago
However, usually the bad result does not show up immediately, and in the meantime the apprentice undergoes a lot of anti-learning before it becomes apparent. By then the learning muscles have atrophied.
gmassman|11 months ago
tompark|11 months ago
Most devs who like AI coding seem to think AI code completion is more efficient than chatting with an LLM. Yes, it's true that code completion is *faster*. But I think chatting with an LLM is more effective.
I'm not going to go into specifics about it now, but maybe if you give it some thought you'll realize the difference. When you're coding you don't always convey your intent to the AI, so it's important to add context with comments. Most coders are too lazy to do that.
Chatting can be just as bad, because many people have horrific prompting style. But I think it's more natural in chat to provide context and explanation, as well as corrections and "oh that's not what i meant" and "in your second point, what exactly do you mean by 'coverage'?", etc. The chat interaction allows you to really hone the code iteratively where both you and the AI have the meaning nailed down.
thewhitetulip|11 months ago
AI is very good at writing code you already know how to write, where you just hand over the typing to it.
I have begun using AI as an assistant, basically to get a first draft; optimization etc. I do after the first block of code is written.
When people who don't know a language sufficiently well use AI, it's a recipe for disaster. Teams will show metrics saying "wow, 90% AI adoption," but the juniors don't learn enough.
At least that's my experience. Very much interested in knowing if I can use AI more efficiently!
austin-cheney|11 months ago
There is already a tremendous gap, like more than an order of magnitude, between high performers and the average participant. LLMs will only serve to grow that performance gap just like added sugars in the food supply.
decide1000|11 months ago
Now I use AI to get to 70% of my code asap. The last 30% is a manual, low-AI approach where I fix the hallucinations and file structure and do the stuff AI fails me at.
I use Claude Chat, ChatGPT, Claude Code and Windsurf (switching between them all the time).
xnorswap|11 months ago
I've just written about that exact scenario. A shoddy piece of code that's just about okay: https://richardcocks.github.io/2025-03-24-PasswordGen
If you're a junior, you might not realise there's anything wrong with the generated code at all.
rmholt|11 months ago
Like you're just using it as an expensive documentation repeater, but now with the spicy possibility of lies.
brudgers|11 months ago
I am pretty sure I have read that about junior developers long before LLM's.
And about juniors in other fields.
Or to put it another way, I don't think I have ever heard a senior professional say, "I can't believe how well prepared all the new graduates are!" Sure, a few programmers hit the ground running, but that's because they were already running for many years, and the standard of the organization is not amazingly high.
But maybe AI has changed everything even if it sounds like what I thought of the next generation a generation ago. Good luck.
mpalmer|11 months ago
These things just let you turn off your brain and spend hundreds of thousands of tokens just rewriting entire features until there aren't any errors left.
esperent|11 months ago
If it works, what's wrong with doing this? Obviously, don't turn your brain off. Be critical and work with the AI. But it's not like there's a shortage of tokens. They're only getting cheaper as time goes by. If, by spending enough tokens, you end up with a working feature, then this is a valid method of doing the work.
bitwize|11 months ago
New programmers should learn the relevant skills on their own: choosing the appropriate relevant abstractions, writing the code, testing, debugging. Maybe someday AI will be able to help talk them through the concepts and process, but I wouldn't trust today's LLMs without CLOSE human oversight. They're still just drawing refrigerator poetry out of a magic, statistically weighted bag of holding.
If you're mid-level or senior and you think "mash button, get slop" will help streamline your workflow in some mission noncritical way, go for it. Slop is convenient, and can free up time to focus on what you think is more important -- Hackernews was all in on Soylent because being able to keep your flesh mech topped up with nutrients without having to prepare food really appeals to the SV grindset crowd -- but slop shouldn't be taking on production workloads, again not without human oversight, which would require equivalent effort to just letting the humans write the damn thing themselves.