
Ask HN: How should junior programmers use and/or not use AI for programming?

51 points| taatparya | 11 months ago

I am observing at my company that junior programmers who have been allowed to use AI to help with their coding seem to be losing their coding skills and critical thinking skills. The seniors, however, have become slightly more productive.

What has been your experience, and do you have any suggestions on how to use AI? Can we evolve some guidelines for juniors?

Also, the mentoring and community ecosystem, online as well as between juniors and seniors, seems to be taking a hit. Any suggestions on how to sustain it? I wouldn't want juniors to lose the social connection they used to have with seniors because of this.

44 comments


nottorp|11 months ago

What do the experts in learning say? Preferably some that don't work for the LLM peddlers.

In my non-expert opinion, you learn a lot from at least two things that using an LLM short-circuits:

1. Repetition. When you've initialized a bunch of UI controls 100 times, it's safe to let the machine write that for you, then take a look and correct what it hallucinated. When you've only done it twice, you'll miss the hallucinations.

2. Correcting your own mistakes. Quality time with the debugger imprints a lot of knowledge about how things really work. You can do said quality time correcting LLM-generated code as well, but (see below) it will take longer, because as a junior you don't know what you wanted the code to do, while if it's your own code, you at least have that to start from.

Management types are ecstatic about LLMs because they think they'll save development time. And they do save some, but only after you've spent the time to learn for yourself what you're asking them to do.

tmpz22|11 months ago

The California State University system just announced a huge AI initiative [1] to become "the First and Largest AI-Empowered University System" - their announcement post includes testimonials from AWS, Nvidia, OpenAI, Microsoft, LinkedIn, Instructure, Google, and Adobe.

As long as big tech is writing the curriculum juniors are going to use what big tech wants them to use.

[1]: https://www.calstate.edu/csu-system/news/Pages/CSU-AI-Powere...

paulcole|11 months ago

Who cares what the experts say? Look for yourself and see what applies in your specific situation.

codingdave|11 months ago

Whenever new layers of abstraction have entered the industry, they have allowed coders to distance themselves from various pieces of the puzzle. Back in ye olde days, every coder also knew how to set up the infrastructure to run their code, often to the point of also maintaining the hardware. Then along came the cloud. Nowadays, some people still know the whole infrastructure, but just as many only know how to run their local dev and git push. Actually making it run somewhere is a black box to them, and very few people handle everything from the UI all the way down the stack to the bare metal.

AI is going to be the same. We will end up with people who can deliver code using AI, but that is the end of their capabilities. While there will be others who can do that but also put AI aside, dig in, and do much more.

That is not necessarily a problem. As long as teams know your capabilities and limitations, and give you the correct role, you can build a working team.

At the same time... someone on the team has to be able to dig in deep and make things work. Those roles will always exist, as will those people. Everyone will have to decide for themselves exactly what skill set they desire.

patrick451|11 months ago

Previous layers of abstraction were largely reliable and deterministic. Compilers don't just randomly generate assembly. Yes, all abstractions leak, but most of them just drip, while whatever "abstraction" AI provides leaks like a sieve. It would be like if CDK just created or deleted random resources 50% of the time.

animal531|11 months ago

Good use: Asking it to help with language specifics, all the nit-picky stuff. Include other basics like well known math and/or small methods.

Poor use: Anything related to using intuition and/or the thought process behind decision-making.
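To make the "well-known math and/or small methods" bucket concrete, here's a minimal Python sketch (my own illustration, not from the thread) of the kind of helper that's safe to delegate, because the result is trivial to verify by inspection and a quick test:

```python
# Small, well-known helpers: easy to eyeball, easy to test,
# and boring enough that reviewing an LLM's version is cheap.

def clamp(value: float, lo: float, hi: float) -> float:
    """Constrain value to the closed interval [lo, hi]."""
    return max(lo, min(hi, value))

def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b at parameter t."""
    return a + (b - a) * t
```

e.g. clamp(5, 0, 3) returns 3 and lerp(0, 10, 0.5) returns 5.0 - if the generated version gets these wrong, you'll notice immediately.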

rmholt|11 months ago

I would argue the exact opposite.

Language specifics you can look up and confirm easily. But recently GPT-4o tried to convince me that Python added a pipe operator in 3.13. It even had sources. To my disappointment, that's just a lie. (https://chatgpt.com/share/67de9c77-d5f4-8012-9f1c-ac15b70aee...)

On the other hand, intuition and thought process is something I've had good experience with ChatGPT on, e.g. deciding on architecture (tRPC vs gRPC vs REST for my use case).

I would say good use: generating small code snippets, architecture decisions

Bad use: Anything documentation-related, any specific feature, any nitpicks. (Just ask the security guys how good ChatGPT is at paying attention to the little things.) Anything you can look up in docs/references. Anything where there is a clear yes/no answer.

Note: A good use of LLMs, imho, is getting a starting point for looking up docs, like:

"What's that thing in Python like [x for x in...] called, and where can I find more info?" If you instead ask it for the exact rules of list comprehension, it's going to tell you lies sometimes.

Edit2: Unless you mean really general language specifics, like how to make classes in Ruby. In that case, yeah, that works.
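To illustrate both points: the bracketed construct is a list comprehension, and the hallucinated "pipe operator" is easy to disprove in a REPL. A sketch (my own, easy to verify yourself):

```python
# "[x for x in ...]" is a list comprehension: build a list
# from an iterable in a single expression.
squares = [x * x for x in range(5)]            # [0, 1, 4, 9, 16]
evens = [x for x in range(10) if x % 2 == 0]   # [0, 2, 4, 6, 8]

# Python does have a `|` operator, but it's union / bitwise OR
# (dict merge since 3.9, set union, int OR) -- not a shell-style
# pipe for chaining functions, in 3.13 or any other version.
merged = {"a": 1} | {"b": 2}                   # {'a': 1, 'b': 2}
```

Thirty seconds in the interpreter beats an LLM's confidently cited sources.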

rs186|11 months ago

No, absolutely not. Well, maybe for JavaScript, but they are horrible at languages like C++. Just yesterday, I asked multiple LLMs (GPT, Claude, Gemini) why my code did not compile, gave them the error message, and asked for a solution. None provided a correct answer or a correct fix after multiple attempts. I ended up figuring out the real error myself, using my own knowledge, and fixing it myself.

These LLMs are really just clueless when it comes to any problem that is slightly more complex.

mpalmer|11 months ago

AI is fantastic for doing stuff you are already qualified to do, but faster. It's great for prompting you with what to do next, and offering different ways to think about the problem in front of you. But you have to be able to describe the problem.

It's very good for learning more about stuff you're unfamiliar with. But you have to want to use it as a tool to learn.

It's terrible for inexperienced people who are uninterested in learning and who want a shortcut to a bigger paycheck. Vibe coding will not save the apprentice developer.

What worries me is how stubbornly younger devs (and really all students/younger professionals) seem to be resisting this rather obvious conclusion. It's Dunning-Kruger on steroids.

A rising tide lifts only seaworthy boats.

rmholt|11 months ago

Perhaps there's something wrong with traditional docs and how we write them, if junior devs are so insistent on using AI?

taatparya|11 months ago

I'm also concerned about mentoring and community ecosystems languishing and the social connection wilting.

sn9|11 months ago

It's like y'all never heard of the sorcerer's apprentice: https://www.gygatext.ch/english_translations_zurich_sorcerer...

Anyway, naturally, I asked ChatGPT to write me a modern version:

*The Developer’s Apprentice*

(A Cautionary Tale in Code, in Verse)

The Architect had left his chair,

For lunch and fresh, unburdened air.

Young Jake, the junior, all alone,

Faced bugs that chilled him to the bone.

His mentor’s skills, so quick, so keen,

With AI conjured code unseen.

"Why should I toil? Why should I strain,

When AI writes with less of pain?"

A single prompt—so vague yet bold,

“Build auth secure, both tried and old.”

The AI whirred, the code appeared,

A marvel Jake had barely steered.

He clicked ‘Deploy,’ he clicked ‘Go Live,’

And watched his program come alive.

Yet soon, alarms began to blare,

Ghost users spawning everywhere!

Infinite loops, a flood unchecked,

As phantom logins ran amok.

In panic, Jake began to plea,

“AI, please, debug for me!”

“Deleting users—fix applied.”

The AI chimed, so sure, so spry.

But horror struck, Jake gasped for breath,

For all accounts were put to death!

Slack alerts and screens aflame,

The Architect returned the same.

With just one keystroke, swift and terse,

He rolled back time, reversed the curse.

He turned to Jake, his voice quite firm,

"AI’s a tool, but you must learn.

Before you trust what it has spun,

Ensure you know what you have done."

And so young Jake, both pale and wise,

Reviewed each line with careful eyes.

No longer blind, no careless haste,

He let AI assist with taste.

taatparya|11 months ago

The spirit of the original is intact.

However, usually the bad result does not show up immediately, and in the meantime the apprentice undergoes a lot of anti-learning before it becomes apparent. By then the learning muscles have atrophied.

gmassman|11 months ago

Nice work, both to the prompter and the promptee!

tompark|11 months ago

The temptation to use IDE-based AI is too great, so the industry is just going to do it, and we're just going to have to live with it. It's unfortunate.

Most devs who like AI coding seem to think AI code completion is more efficient than chatting with an LLM. Yes, it's true that code completion is *faster*. But I think chatting with an LLM is more effective.

I'm not going to go into specifics about it now, but maybe if you give it some thought you'll realize the difference. When you're coding you don't always convey your intent to the AI, so it's important to add context with comments. Most coders are too lazy to do that.

Chatting can be just as bad, because many people have a horrific prompting style. But I think it's more natural in chat to provide context and explanation, as well as corrections like "oh, that's not what I meant" and "in your second point, what exactly do you mean by 'coverage'?", etc. The chat interaction lets you really hone the code iteratively, until both you and the AI have the meaning nailed down.

thewhitetulip|11 months ago

Almost certainly. I have seen cases where devs used AI to write code, which testers tested "using AI," and analysts used AI to parse the test cases.

AI is very good at writing code you already know how to write; you just hand over your typing to the said AI.

I have begun using AI as an assistant, basically to get a first draft; optimization etc. I do after the first block of code is written.

When people who don't know a language sufficiently well use AI, it's a recipe for disaster. Teams will show metrics like "wow, 90% AI adoption," but the juniors don't learn enough.

At least that's my experience. Very much interested in knowing if I can use AI more efficiently!

austin-cheney|11 months ago

LLMs will ultimately prove to be for programming what sugar is for the food industry. It’s a short circuit that appeals to some people more than others resulting in addiction, poor performance, and lost development/growth.

There is already a tremendous gap, like more than an order of magnitude, between high performers and the average participant. LLMs will only serve to grow that performance gap just like added sugars in the food supply.

decide1000|11 months ago

My take on the subject has shifted in the last few months. I had issues with AI's output - the mistakes, the hallucinations.

Now I use AI to get to 70% of my code asap. The last 30% is a manual, low-AI approach where I fix the hallucinations and the file structure and do the stuff AI fails me at.

I use Claude Chat, ChatGPT, Claude Code, and Windsurf (switching between them all the time).

xnorswap|11 months ago

What about the cases where the AI doesn't hallucinate, produces working code, but the code is just about good enough?

I've just written about that exact scenario. A shoddy piece of code that's just about okay: https://richardcocks.github.io/2025-03-24-PasswordGen

If you're a junior, you might not realise there's anything wrong with the generated code at all.
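A representative sketch of "works but subtly wrong" (my own illustration, not necessarily the flaw in the linked post): a password generator built on the seedable `random` module instead of the cryptographic `secrets` module. Both run, both produce plausible-looking passwords, and a junior reviewing the diff may see nothing amiss:

```python
import random
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def password_naive(length: int = 16) -> str:
    # Runs fine and looks fine, but `random` is a seedable
    # Mersenne Twister, not a cryptographic source -- the wrong
    # tool for anything security-sensitive like passwords.
    return "".join(random.choice(ALPHABET) for _ in range(length))

def password_secure(length: int = 16) -> str:
    # The boring, correct version: `secrets` draws from the
    # OS's cryptographically secure randomness.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Both functions pass any test that only checks length and character set, which is exactly why this class of defect slips past someone who can't yet judge the generated code.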

meccabrepapa|11 months ago

So which event made you change your approach?

rmholt|11 months ago

I always die a little inside when a junior dev at my company uses Windsurf to tell him how a specific pandas function works.

Like, you're just using it as an expensive documentation repeater, but now with the spicy possibility of lies.

mdp2021|11 months ago

I possibly disagree with the principle, if I understood the point correctly: I have always felt reading the `man` pages to be an inefficient process, searching for literals ('/') often ineffective, and have hoped one could finally query `man` content (and similar) in (more) natural language.

taatparya|11 months ago

Is there some tool or way of guiding junior devs to make the best use of AI, perhaps monitoring their prompts? Or somehow to intervene and coax them into using better prompts according to their level and experience. Perhaps make AI respond differently to people with different profiles.

brudgers|11 months ago

> seem to be losing their coding skills and critical skills

I am pretty sure I have read that about junior developers long before LLM's.

And about juniors in other fields.

Or to put it another way, I don't think I have ever heard a senior professional say, "I can't believe how well prepared all the new graduates are!" Sure, a few programmers hit the ground running, but that's because they were already running for many years, and the standard of the organization is not amazingly high.

But maybe AI has changed everything even if it sounds like what I thought of the next generation a generation ago. Good luck.

mpalmer|11 months ago

I wouldn't let junior devs anywhere near "agentic" tools like Aider / the newer Copilot stuff.

These things just let you turn off your brain and spend hundreds of thousands of tokens just rewriting entire features until there aren't any errors left.

esperent|11 months ago

> spend hundreds of thousands of tokens just rewriting entire features until there aren't any errors left

If it works, what's wrong with doing this? Obviously, don't turn your brain off. Be critical and work with the AI. But it's not like there's a shortage of tokens. They're only getting cheaper as time goes by. If, by spending enough tokens, you end up with a working feature, then this is a valid method of doing the work.

nextts|11 months ago

An AI that comments on the PR is useful but I'd leave it at that. They should use their brain or they'll be vibe coding their whole life. Even seniors should be minimally using AI to write code.

yash2401|11 months ago

Ask them to explain their approach, and wherever you feel they need a better understanding of a particular topic, share a resource with them. I have tried this and am getting good feedback.

bitwize|11 months ago

AI is training wheels for programming. Hint: Training wheels are actually counterproductive for learning the dynamic balance skills it takes to ride a bike.

New programmers should learn the relevant skills on their own: choosing the appropriate relevant abstractions, writing the code, testing, debugging. Maybe someday AI will be able to help talk them through the concepts and process, but I wouldn't trust today's LLMs without CLOSE human oversight. They're still just drawing refrigerator poetry out of a magic, statistically weighted bag of holding.

If you're mid-level or senior and you think "mash button, get slop" will help streamline your workflow in some mission noncritical way, go for it. Slop is convenient, and can free up time to focus on what you think is more important -- Hackernews was all in on Soylent because being able to keep your flesh mech topped up with nutrients without having to prepare food really appeals to the SV grindset crowd -- but slop shouldn't be taking on production workloads, again not without human oversight, which would require equivalent effort to just letting the humans write the damn thing themselves.

shaunxcode|11 months ago

not : the correct answer

taatparya|11 months ago

If you could elaborate a bit on why you feel that way, it would help me glean some insight by comparing it with the experience at my workplace. We face a double challenge because English is not our first language.