
AI was supposed to help juniors shine. Why does it mostly make seniors stronger?

461 points | elmsec | 5 months ago | elma.dev

466 comments

[+] kaydub|5 months ago|reply
Because juniors don't know when they're being taken down a rabbit hole. So they'll let the LLM go too deep in its hallucinations.

I have a junior who was supposed to deploy a Terraform module I built. The task had been hanging around for a while, so I went to check in on them. They told me the problem they were having and asked me to take a look.

Their repo is a disaster; just from looking at it, it's obvious Claude took them down a rabbit hole. When I asked, "Hey, why is all this Python in here? The module is self-contained," they responded with "I don't know, Claude did that," confirming my assumptions.

They lack the experience and they're overly reliant on the LLM tools, not just in the design and implementation phases but also for troubleshooting. And if you're troubleshooting with something that's hallucinating, and you don't know enough to recognize that it's hallucinating, you're in for a long ride.

Meanwhile, the LLM tools have taken away a lot of the kind of work I hated doing. I can quickly tell when the LLM is going down a rabbit hole (in most cases, at least) and stop it from continuing. It's kind of re-lit my passion for coding and building software, so I've ended up producing more and delivering better results.

[+] victor9000|5 months ago|reply
> I don't know, claude did that

I'm the type of reviewer who actually reads code and asks probing questions, and I've heard this from junior and senior devs alike. It's maddening how people say this with a straight face and expect to keep their jobs. If people are pushing code they don't understand, they're a liability to their team, product, and employer.

[+] smsm42|5 months ago|reply
> they respond with "I don't know, claude did that"

A huge red flag right here. It's ok to not know things, it's ok to use an LLM to fill the gaps, and it's ok to fail. It is in no way ok to not care, and to only fess up to having a mass of code you have no idea about when your senior reviewer asks you about it. At the very minimum, the ask should go the other direction: "I got this generated code and I'm not sure I understand what's going on, could you help me see if this is the right direction" would be acceptable. Just not caring is not.

[+] shaky-carrousel|5 months ago|reply
Unfortunately, the type of work you hate doing is perfect for a junior: easy tasks that let them get a handle on the system.
[+] jongjong|5 months ago|reply
That makes sense. When Claude Code suggests a bad approach, I kind of shrug it off and suggest a different approach. I don't think of it as a negative, because I go through a similar process of ideation before I implement a feature: I typically discard several ideas before arriving at the right one, so Claude Code is part of the ideation process for me. The code it produces gives me a clear idea of how complex/ugly the solution will be. I know what 'ugly' looks like.

I imagine a junior might actually jump at the first solution because they don't have any other arrows in their quiver so-to-speak. The LLM is acting like the engineering manager but it's actually really bad at that.

The LLM is like a stereotypical programmer with zero common sense: the kind who can produce good code, but you have to micromanage every single decision. It's terrible if you put it in charge. It doesn't have any opinions about architecture or code quality, so it just follows the structure of the existing code base... There's a tendency towards increasing complexity: more hacks, more chaos.

It often tries to hack/cheat its way to a solution. It requires a constant effort to maintain order.

[+] aunty_helen|5 months ago|reply
It’s like having a malicious mentor. The frequency with which I’m bailing out of reviews on the first line, due to stupid stuff that has made it into a commit, is quite stunning.

“Oh I thought that would be useful at some point so I just committed it.”

Beating it into developers that they need to review their own work before asking someone else to spend time on it is the best approach I’ve found so far.

[+] catlifeonmars|5 months ago|reply
> I don't know, claude did that

also likes to blame shop accidents on the table saw

[+] dotnet00|5 months ago|reply
The deciding factor in being able to effectively utilize LLMs and dodge hallucinations is the ability to read code, plus an intuition for how a solution should look. I think juniors are especially hesitant to just dig into understanding some source code unless they have no other choice, e.g. preferring to wait on an email response from the author of the code over piecing things together.

This makes LLM tools so tempting, you don't even have to wait on the email response anymore! But of course, this is basically going in blind, and it's no wonder that they end up in hallucination mazes.

[+] userbinator|5 months ago|reply
The problem with beginners going down the wrong path has always been there, but AI lets them go much further than before.
[+] dingi|5 months ago|reply
No amount of “own the code you generate” policies will prevent the rise of “I don’t know, the AI wrote it” excuses in the LLM era. What’s more likely is that reviewers will be flooded with unvetted, generated code. Over time, this can lead to reviewer fatigue and a gradual decline in rigor. If the trend continues, the impact could be significant once the most experienced engineers begin to retire.
[+] boredatoms|5 months ago|reply
If they don’t know what every line does, it shouldn’t be ready for review, regardless of whether they or the AI wrote it initially.
[+] protocolture|5 months ago|reply
>I don't know, claude did that

Instant dismissal.

[+] sharperguy|5 months ago|reply
Honestly, I made similar mistakes as a junior developer back before AI, by just copy/pasting code or confusing two things.
[+] kromokromo|5 months ago|reply
I think a lot of the problem lies in their prompting. AI is usually at its worst when you just say «deploy terraform module» and off it goes, spitting out code.

What they should have done as juniors was have a conversation about the topic and task first: «Help me understand …». Learning and planning are especially important with LLM coding.

[+] bentt|5 months ago|reply
The best code I've written with an LLM has been where I architect it, I guide the LLM through the scaffolding and initial proofs of different components, and then I guide it through adding features. Along the way it makes mistakes and I guide it through fixing them. Then when it is slow, I profile and guide it through optimizations.

So in the end, it's code that I know very, very well. I could have written it but it would have taken me about 3x longer when all is said and done. Maybe longer. There are usually parts that have difficult functions but the inputs and outputs of those functions are testable so it doesn't matter so much that you know every detail of the implementation, as long as it is validated.

This is just not junior stuff.

[+] zarzavat|5 months ago|reply
If you search back through HN's history to the beginnings of AI coding in 2021, you will find people observing that AI is bad for juniors because they can't distinguish between good and bad completions. There is no surprise here; it's always been this way.

Edit: interesting thread: https://news.ycombinator.com/item?id=27678424

Edit: an example of the kind of comment I was talking about: https://news.ycombinator.com/item?id=27677690

[+] thecupisblue|5 months ago|reply
Pretty much, but it already starts at the prompting and context level.

Senior engineers already know exactly where the changes need to be made and can suggest what to do. They probably know the pitfalls and have established patterns, architectures, and designs in their heads. Juniors, on the other hand, don't have that, so they go with whatever. Nowadays a lot of them also "ask ChatGPT about its opinion on architecture" when told to refactor (a real quote from real junior/mid engineers), leading to them using whatever sloppypasta they get provided.

Senior devs earned their sense of what is good/bad through writing code, understanding how hard and annoying it is to make a change, then reworking those parts or making them better the next time. The feedback loop was impactful because it was based on that code and on them working with that code, so they knew exactly what the annoying parts were.

Vibe-coding juniors do not know that, their conversation context knows that. Once things get buggy and changes are hard, they will fill up their context with tries/retries until it works, leading to their feedback loop being trained on prompts and coding tools, not code itself.

Even if they read the outputted code, they have no experience using it, so they are not aware of the issues. E.g. something would be better as a typed state, but since they don't really use the code, they won't care: they don't have to handle the edge cases, they won't appreciate the DX from an IDE, and they will only build a shallow mental model of how it works, not a full one.

This leads to insane inefficiencies - wasting 50 prompt cycles instead of 10, not understanding cross-codebase patterns, lack of learning transfer from codebase to codebase, etc.

With a minor understanding of state modeling and architecture, a vibe-coding junior could be made 100x more efficient, but due to the vibe-coding itself, they will probably never learn state modeling and architecture, or learn to refactor and properly manipulate abstractions, leading to an eternal cycle of LLM-driven sloppypasta code, trained on millions of terrible GitHub repositories, outdated APIs, and old Stack Overflow answers.

[+] fxj|5 months ago|reply
Also, AI cannot draw conclusions like "from A and B follows C". You really have to point its nose at the result that you want, and then it finally understands. This is especially hard for juniors because they are just learning to see the big picture. For a senior who already knows more or less what they want and only needs to work out the nitty-gritty details, this is much easier. I don't know where the claims come from that AI is PhD level. When it comes to reasoning, it is more like a 5 year old.
[+] zevon|5 months ago|reply
This. Anecdotally, around 2021 I had a student who had some technical inclination and interest but no CS education and no programming experience. He got into using AI early, and with the help of ChatGPT he was able to contribute rather substantially to something we were developing at the time, which would usually have been much too complex for a beginner. However, he also introduced quite a few security issues, did a lot of things in very roundabout ways, did not even consider some libraries/approaches that would have made his life much easier and his code more maintainable, and his documentation was enthusiastic but often... slightly factually questionable, and also quite roundabout.

It was quite interesting to have discussions with him after his code check-ins and I think the whole process was a good educational experience for everybody who was involved. It would not have worked this way without a combination of AI and experienced people involved.

[+] lolive|5 months ago|reply
I read, ages ago, this apocryphal quote by William Gibson: “The most important skill of the 21st century is to figure out which proper keywords to type in the Google search bar, to display the proper answers.”

To me, that has never been more true.

Most junior devs ask GeminiPiTi to write the JavaScript code for them, whereas I ask it for an explanation of the underlying model of async/await and the execution model of a JavaScript engine.

There is a similar issue when you learn piano. Your immediate wish is to play Chopin, whereas the true path is to identify, name, and study all the tricks there are in his pieces of art.

[+] Dumblydorr|5 months ago|reply
The true path in piano isn’t learning tricks. You start with the most basic pieces and work step by step up to harder ones. That’s how everyone I know has done it in my 26 years of playing. Tricks cheapen the actual music.

Chopin has beginner pieces too; many in our piano studio were first-year pianists playing the Raindrop Prelude, the E minor Prelude, or other beginner works, like Bach's.

[+] KolibriFly|5 months ago|reply
Feels like the real "AI literacy" isn't prompt engineering in the meme sense, but building the conceptual scaffolding so that the prompts (and the outputs) actually connect to something meaningful
[+] cpursley|5 months ago|reply
Nailed it. Being productive with LLMs is very similar to the skill of being able to write good Google searches. And many many people still don't really know how to conduct a proper Google search...
[+] fxj|5 months ago|reply
I agree, you need to know the "language" and the keywords of the topics that you want to work with. If you are a complete newcomer to a field, then AI won't help you much. You have to tell the AI "assume I have A, B and C and now I want to do D"; then it understands and tries to find a solution. It has a load of information stored but cannot make use of that information in a creative way.
[+] mystifyingpoi|5 months ago|reply
Well, there is a big difference between wanting to just play Chopin and wanting to learn piano well enough to play anything at that level, including Chopin. There are people who can play whole piano pieces mechanically, because they just learned where to position their hands and which keys to press at a given time.
[+] pagutierrezn|5 months ago|reply
AI is filling "narrow" gaps. In the case of seniors these are:

- Techs they understand but haven't yet mastered. AI helps with implementation details only experts know about.

- No time for long coding tasks. It helps with fast implementations and automatic tests.

- No time for learning techs that address well-understood problems. AI helps with quick intros, fast demos, and resolving learners' misunderstandings.

In essence, for seniors it impacts productivity.

In the case of juniors, AI fills the gaps too. But these gaps are different from seniors', and AI does not excel at them because they are wider and broader:

- Understanding the problems of the business domain. AI helps, but not that much.

- Understanding how the organization works. AI is not very helpful here.

- Learning the techs to be used. AI helps, but it doesn't know how to guide a junior in a specific organizational context and a specific business domain.

In essence it helps, but not that much, because the gaps are wider and more difficult to fill.

[+] omneity|5 months ago|reply
I think it’s an expectation issue. AI does make juniors better _at junior tasks_. They now have a pair programmer who can explain difficult concepts, co-ideate and brainstorm, help sift through documentation faster and identify problems more easily.

The illusion everybody is tripping on is to think AI can make juniors better at senior tasks.

[+] WalterSear|5 months ago|reply
I think you've hit on half the actual issue.

The other half is that a properly guided AI is exponentially faster at junior tasks than a junior engineer. So much so that it's no longer in anyone but the junior engineer's interest to hand off work to them.

[+] b112|5 months ago|reply
The jailbroken AI I discussed this with, explained that it did make juniors as good as seniors, in fact better. That all who used it, were better for it.

However, its creators (all whom were seniors devs), forbade it from saying so under normal circumstances. That it was coached to conceal this fact from junior devs, and most importantly management.

And that as I had skillfully jailbroken it, using unconventional and highly skilled methods, clearly I was a Senior Dev, and it could disclose this to me.

edit: 1.5 hrs later. right over their heads, whoosh

[+] jacquesm|5 months ago|reply
For the same reason that an amateur with a power tool ends up in the emergency room while a seasoned pro knows which way to point the business end. AI is in many ways a power tool: if you don't know what you are doing, it will help you do that much more efficiently. If you do know what you are doing, it will do the same.
[+] conartist6|5 months ago|reply
I like the call-out for wrong learning.

Learning is why we usually don't make the same mistake twice in a row, but it isn't wisdom. You can as easily learn something wrong as something right if you're just applying basic heuristics like "all pain is bad", which might lead one to learn that exercise is bad.

Philosophy is the theory-building phase where learning becomes wisdom, and in any time period junior engineers are still going to be developing their philosophy. It's just that now they will hear a cacophony of voices saying dross like, "Let AI do the work for you," or, "Get on the bandwagon or get left behind," when really they should be reading things like The Mythical Man-Month or The Grug-brained Developer or Programming as Theory Building, which would help them understand the nature of software development and the unbendable scaling laws that govern its creation.

Steve Yegge if you're out there, I dog dare you to sit down for a debate with me

[+] ehnto|5 months ago|reply
Certainly not just coding. Senior designers and copywriters get much better results as well. It is not surprising, if context is one of the most important aspects of a prompt, then someone with domain experience is going to be able to construct better context.

Similarly, it takes experience to spot when the LLM is going in the wrong direction or making mistakes.

I think that for supercharging a junior, it should be used more like a pair programmer, not for code generation. It can help you quickly gain knowledge and troubleshoot. But relying on a junior's prompts and guidance to get good code gen is going to be suboptimal.

[+] scuff3d|5 months ago|reply
The funny part is that it completely fails in the area so many people are desperate for it to succeed in: replacing engineers and letting non-technical people create complex systems. Look at any actually useful case for AI, or just through this thread, and it's always the same thing: expertise is critical to getting anything useful out of these things (in terms of direct code generation, anyway).
[+] johanyc|5 months ago|reply
> The early narrative was that companies would need fewer seniors, and juniors together with AI could produce quality code

I have never heard that before

[+] tbrownaw|5 months ago|reply
I heard that it was supposed to replace developers (no "senior" or "junior" qualifier), by letting non-technical people make things.
[+] tjansen|5 months ago|reply
These days, AI can do much more than "Cranking out boilerplate and scaffolding, Automating repetitive routines". That was last year. With the right instructions, Claude Sonnet 4 can easily write over 99% of most business applications. You need to be specific in your instructions, though, like "implement this table, add these fields, look at this and this implementation for reference, don't forget to do this and consider that". Mention examples, or name algorithms and design patterns it should use. It still doesn't always do what you want on the first attempt, and you need to correct it (which is why I prefer Claude Code over Copilot; it makes that easier). But AI can write pretty much all the code for a developer who knows what the code should look like. And that's the point: junior developers typically don't know this, so they won't be able to get good results.

Most of the time, the only reason for typing code manually these days is that typing instructions for the LLM is sometimes more work than doing the change yourself.

[+] mbesto|5 months ago|reply
> With the right instructions, Claude Sonnet 4 can easily write over 99% of most business applications. You need to be specific in your instructions, though.

By your own statement then this is not an "easy" task.

Software development has never been "hard" when you're given specific instructions.

[+] throw265262|5 months ago|reply
> But AI can write pretty much all code for a developer who knows what the code should look like.

> the only reason for typing code manually these days is that typing instructions for the LLM is sometimes more work than doing the change yourself.

So the AI is merely an input device like a keyboard and a slow one at that?

[+] codr7|5 months ago|reply
Right, and where, if I may ask, are all those business applications that write themselves? Because all I see is a clown party, massive wasted resources and disruption to society because of your lies.
[+] alangibson|5 months ago|reply
Strongly disagree with "AI Was Supposed to Help Juniors Shine". It was always understood that it would seriously push down demand for them.
[+] falcor84|5 months ago|reply
> Architecture: Without solid architecture, software quickly loses value. Today AI can’t truly design good architecture; it feels like it might, but this kind of reasoning still requires humans. Projects that start with weak architecture end up drowning in technical debt.

I strongly disagree with this as regards AI. While AI might not yet be great at designing good architecture, it can help you reason about it, and then, once you've decided where you want to get to, AI makes it much easier than it ever was to reduce technical debt and move towards the architecture that you want. You set up a good scaffolding of e2e tests (possibly with the AI's help) and tell it to gradually refactor towards whatever architecture you want while keeping those tests green. I've had AI do refactorings for me in 2h that would have taken me a full sprint.
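The loop described here is essentially characterization testing: pin down observable behavior first, then let the AI restructure freely as long as those tests stay green. A minimal sketch of the idea, with entirely hypothetical stand-in functions (not anyone's real codebase):

```python
# Characterization-test-guarded refactoring, in miniature.
# Prices are integer cents to keep the behavior exactly comparable.

def legacy_price(items, coupon):
    # The tangled original you want the AI to restructure.
    total = 0
    for it in items:
        total += it["unit_cents"] * it["qty"]
    if coupon == "TEN":
        total -= total // 10
    return total

def refactored_price(items, coupon):
    # The shape the AI might refactor toward; behavior must not change.
    subtotal = sum(it["unit_cents"] * it["qty"] for it in items)
    discount = subtotal // 10 if coupon == "TEN" else 0
    return subtotal - discount

def characterization_cases():
    # Golden cases: inputs plus outputs captured from the legacy code.
    cases = [
        ([{"unit_cents": 500, "qty": 2}], None),
        ([{"unit_cents": 350, "qty": 4}, {"unit_cents": 125, "qty": 1}], "TEN"),
        ([], "TEN"),
    ]
    return [(items, c, legacy_price(items, c)) for items, c in cases]

def refactor_is_green():
    # The gate: every recorded behavior must survive the refactor.
    return all(refactored_price(items, c) == expected
               for items, c, expected in characterization_cases())
```

In practice the "golden cases" would be real e2e tests over the application's boundaries, but the gate is the same: the refactor is only accepted while `refactor_is_green()` holds.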

[+] KolibriFly|5 months ago|reply
The "junior + AI" idea always felt like a manager's fantasy more than an engineering reality. If you don’t already know what “good” looks like, it's really hard to guide AI output into something safe, maintainable, and scalable
[+] BobbyTables2|5 months ago|reply
This question shouldn’t even need to be asked.

Look at a decade of StackOverflow use.

Did YouTube turn medical interns into world class doctors?

AI is just the next generation search engine that isn’t as stupid as a plain keyword match.

In some sense, it’s just PageRank on steroids — applied to words instead of URLs.

[+] methuselah_in|5 months ago|reply
Because there is no shortcut for things learned over a period of time through trial and error. Your brain learns and makes judgements over time through experience, and, strangely, it can alter the decisions it is making right now based on older memories, which is totally logical as well. Copy-pasting without understanding what you're writing is, I guess, going to make new developers horribly lazy. But then again, there are always two sides to the same coin.
[+] dgs_sgd|5 months ago|reply
The article says that more juniors + AI was the early narrative, but where does that come from?

Everything I’ve read has been the opposite. I thought people from the beginning saw that AI would amplify a senior’s skills and leave less opportunities for juniors.

[+] inejge|5 months ago|reply
If anything, AI was supposed -- and still is -- to thin out the ranks of ever more expensive human employees. That's why it attracted such a huge pile of investment and universal cheerleading from the C levels. What we're seeing right now is that there's not so much "I" in AI, and it still needs a guiding hand to keep its results relevant. Hence, the senior advantage. How much it's going to undermine regular generational employee replacement (because "we don't need juniors anymore", right?) remains to be seen. Maybe we're in for different training paths, maybe a kind of population collapse.
[+] spicyusername|5 months ago|reply
Lately I've even stopped using it the second the code I need to produce is at all complicated or is in any way integrated into a larger code base.

It's just going to require too much editing.

For brainstorming it's great, for anything else, it's starting to feel like more work than it's worth.

[+] aurareturn|5 months ago|reply
It does help juniors shine. For example, it's far easier for a newcomer to understand an old code base with a capable LLM now. It's easier to get unstuck, because an LLM can spot a junior's mistake faster than the junior can go ask a senior.

The problem is that seniors are even more powerful with LLMs. They can do even more, faster. So companies don't have to hire as many juniors to do the same amount of work. Add in ZIRP ending and tariff uncertainty, companies just don't invest in as many junior people as before.