
Revenge of the Junior Developer

68 points | ado__dev | 11 months ago | sourcegraph.com

89 comments


nickysielicki|11 months ago

I’m usually one of the people complaining about hype cycles, and it’s usually been correct to be pessimistic about them.

But in this particular case I have to think a lot of people just haven’t tried it in its best form. No, not a local model on your MacBook. No, not the web interface on the free plan. Go lay down $300 into API credits, spend a weekend (or maybe two) fully setting up aider, really give it a shot. It’s ultimately a pretty small amount of money when it comes to figuring out whether the people who are telling you there’s an existential risk to your livelihood on the horizon are onto something, don’t you think?

myko|11 months ago

I find myself much the opposite - I don't usually complain about hype cycles, thinking we should wait and see rather than rush to judgement. In this case I feel like we've seen enough to know LLMs are not capable of performing anyone's job.

LoganDark|11 months ago

$300? That much sounds like it would last a year. You don't need to spend anywhere near $300 just to try things out

Timber-6539|11 months ago

How much does Deepseek's equivalent cost?

skydhash|11 months ago

> It’s ultimately a pretty small amount of money when it comes to figuring out whether the people who are telling you there’s an existential risk to your livelihood on the horizon are onto something, don’t you think?

Nope. I'd rather buy some books or a Jetbrains subscription.

djha-skin|11 months ago

> I have bad news: Code completions were very popular a year ago, a time that now feels like a distant prequel. But they are now the AI equivalent of “dead man walking.”

I disagree. I view this as a "machine guns versus heat-seeking missiles from the '70s" dichotomy. Sure, using missiles is faster. However, sometimes you're too close for missiles. Also, machine gun rounds are way cheaper than missiles. Still, when they first came out, missiles were viewed as the future. For a while, fighter jets were made without machine guns, but they added them back later because they decided they needed both.

Sometimes I find I want to drill down and edit what Claude generated. In that case, Copilot is still really nice.

With regard to AI-assisted coding: the more you know what you're doing, and the more you know the code base, the better result you'll get. To me it feels like a rototiller or some other power tool. It plows soil way faster than you can and is self-propelled, but it isn't self-directed. Using it still requires planning, and it's expensive to run. While using the tool, you must micromanage its direction, constantly giving it feedback through your hands, or it goes off course.

A rototiller could be compared to a hired hand doing the plowing himself, I guess, but there's way less micromanagement with a hired hand than with a rototiller.

Kind of like horses and cars. Horses can get you home if you're drunk. Cars can't.

The proper use of AI agentic tools is like operating heavy machinery. Juniors can really hurt themselves with it, but seniors can do a lot of good. The analogy goes further: sometimes you need to get out of the backhoe and dig with smaller tools like jackhammers or just shovels. The jackhammer is like Copilot -- a mid-grade power tool -- and Claude Code is like the backhoe. Clunky, crude, but it can get massive amounts done quickly, if that's what's needed.

skydhash|11 months ago

> Clunky, crude, but can get massive amounts done quickly, if that's what's needed.

You know what's quicker in your analogy? A spell. Or, in the coding world: templates, snippets, code generators, frameworks, and metaprogramming, where you abstract all the boilerplate behind a few commands. You already know the blast radius of your brute modification tools, so you no longer have to micromanage them. And it's reliable.
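A minimal sketch of the kind of metaprogramming the commenter means (the decorator and class names here are illustrative, not from any library): the repetitive code is generated once, deterministically, instead of being hand-written or LLM-generated per field.

```python
# Generate repetitive "fluent setter" boilerplate with a class decorator,
# rather than writing set_host(), set_port(), ... by hand for every field.
def with_setters(*fields):
    """Class decorator adding a chainable set_<field>() method per field."""
    def decorate(cls):
        for name in fields:
            # Bind the current field name via a default argument so each
            # generated setter captures its own name, not the loop variable.
            def setter(self, value, _name=name):
                setattr(self, _name, value)
                return self  # chainable
            setattr(cls, f"set_{name}", setter)
        return cls
    return decorate

@with_setters("host", "port", "timeout")
class Config:
    pass

cfg = Config().set_host("localhost").set_port(8080).set_timeout(30)
print(cfg.host, cfg.port, cfg.timeout)  # localhost 8080 30
```

The point of the analogy: the "blast radius" of this generator is known in advance (it only ever adds `set_*` methods), so it needs no supervision on each use.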

tyleo|11 months ago

I love how new technology becomes like religion. It develops both cult followers and critics.

Through that lens, I think the AI cult is more right than the crypto cult. At least I can use AI to do something tangible right now, while crypto is still pretty useless after many years.

In some sense I think these technologies need the cults and the critics though. It’s good to have people push new things forwards even if everyone isn’t along for the ride. It’s also good to have a counter side poke holes. I think the world is better with both optimists charting new paths forwards and pessimists making sure they don’t walk right off a cliff.

LightHugger|11 months ago

Cryptocurrency allows people who are otherwise blocked by draconian payment-platform limitations, the puritanical moralists who own credit card companies, and the like to buy and sell things online without being stopped. It's obviously not a good system, but it provides a high-effort release valve, a fallback mechanism that will hopefully undermine some of those draconian measures. Lots of people have been helped by the existence of cryptocurrency because of this, especially since lately a lot of bad actors control payment platforms and often shut down businesses on a whim.

Whether more have been helped or hurt is debatable but it certainly has a tangible, if niche, use case with real value. It certainly has no value as a store of value, though.

aorloff|11 months ago

Crypto is still pretty valuable relative to how useless it remains.

Unless you are imagining a world in which there's a global conflict and crypto isn't shut down in the first 12 months.

monsieurbanana|11 months ago

Even if I would agree with everything the article says, I have no idea how the author gets to the conclusion that junior developers will prevail because they are faster at adopting LLMs.

Didn't he just make a point about how fast the situation is evolving? I had some FOMO about AI last year; not anymore. I don't care that I don't have time to fully explore the current LLM state of the art, because in a month it will be obsolete. I'm happy waiting until it settles down.

And if their scenario ends up happening, and you can basically multiply a dev's productivity by N by paying N x K dollarinos, why would you choose a junior dev? It's cheaper, but sometimes a junior dev doesn't just take longer to arrive at a solution; they never arrive at one at all (the same goes for senior devs, don't get me wrong, but it happens less often).

davydm|11 months ago

the only good part was the joke about "vibecoding" (shudder what a stupid term) being like a fart and attracting flies... ok investors

still, this "ai code tools will deprecate real programming" bullshit will one day be laughed at just like how most of us laugh at shitcoin maniacs

it just takes a lot of people way too long to learn

skydhash|11 months ago

Maybe there's a different universe out there, where the code you write is not expected to work, so you can poke the LLM for a whole day to see if it barfs something out.

I spend much of the day reading and thinking and only a small portion actually writing code, because when I'm typing, I usually have a hypothetical solution that is 99% correct and I'm just bringing it to life. Or I'm refactoring. You can interrupt me at any time and I could give you the complete recipe of what I'm doing.

Which is why I don't use LLMs: it's actually twice the work for me. Typing out the specs, then verifying and editing the given result, when I could have typed the code in the first place. And they suck at prototyping. Sometimes I want to leave something in a bare state where only one incantation works, because I'm not sure of the design yet, and leave a TODO comment, but they go ahead and generate more complicated code. Which is a pain to refactor later.

null_name|11 months ago

Yeah, call me a cynic or conservative or whatever - I'll believe it when I see it. I give very little weight to predictions about the future from AI shills, especially when they include some variant of "we're 90% there already" or "an exponential shift is imminent, if things keep improving at this rate, which they Will." Opinion discarded, create your thing and come back if/when it works.

Everything is shifting so fast right now that it hardly matters anyways. Whatever I spend time learning will be outdated in a few years (when things are predicted to get good). It does matter if you're trying to sell AI products, though. Then you gotta convince people they're missing out, their livelihood is at stake if they don't use your new thing now now now.

6510|11 months ago

I had this rather comical picture where developers finally get to experience what it is like to have someone write software for you. You get sort of what you asked for but it is obviously wrong to even the most novice user. You then change the requirements and get something entirely different but equally wrong or worse... and a new invoice. hahaha It gets more funny the more I think about it.

disambiguation|11 months ago

Everyone: telling me how great AI is.

No one: making anything great with AI.

Sourcegraph: an AI company, routinely promoting their LLM-optimism blogposts to HN, perpetuating the hype cycle their business model depends on.

rocmcd|11 months ago

This is the true litmus test IMO. If LLMs are so great and make everyone so productive, then where are the results? Where are all of the amazing products being released that otherwise would have required 10x the investment? Shouldn't there be _anything_ we can point to that shows that the "productivity needle" is being moved?

null_name|11 months ago

This sapling is twice as large as it was a week ago, which was twice again as large as it was the week before. Why, at this rate, it'll be bigger than the whole world in but a month.

Kiro|11 months ago

It's the opposite. Most people are not boasting about their productivity improvements, but it's everywhere. Unless you work at a company where you're not allowed to use these tools, it should be impossible to miss. Even the most hardcore naysayers I know are now using AI tools. The new discourse is whether the massive increase in code output leads to issues or not (I think it does), but claiming it isn't happening is not a serious take anymore.

jcgrillo|11 months ago

> We’re talking about each developer gradually boosting their productivity by a multiplier of ~5x by Q4 2025 (allowing for ramp-up time), for an additional amortized cost of only maybe $50k/year the first year. Who wouldn’t go for that deal?

OK, I'll take the other side of that bet. If in Q4 '25 devs using cursor or whatever are 5x as productive as me using emacs, I'll give this AI stuff another chance. But I'm pretty sure it won't happen.

throwaway173738|11 months ago

Oh good. When I saw the graph I started wondering what I was being sold.

shove|11 months ago

I read the whole thing, top to bottom, and by the 80% mark, I still wasn't sure whether the piece was written in earnest or was the sharpest piece of satire I've seen in a decade.

felideon|11 months ago

On a corporate blog, though, for a product that is competing in this space?

jsdalton|11 months ago

Much of this post was spot on — but the blind spots are highly problematic.

In this agentic AI utopia of six months from now:

* Why would developers — especially junior developers — be assigned oversight of the AI clusters? This sounds more like an engineering management role that's very hands-on. That makes sense because the skill set required for the desired outcomes is no longer "how do I write code that makes these computers work correctly" but rather "what's the best solution for our customers and/or business in this problem space." Higher-order thinking, expertise in the domain, and dare I say wisdom are more valuable than knowing the intricacies of React hooks.

* Economically speaking what are all these companies doing with all this code? Code is still a liability, not an asset. Mere humans writing code faster than they comprehend the problem space is already a problem and the brave new world described here makes this problem worse not better. In particular here, there’s no longer an economic “moat” to build a business off of if everything can be “solved” in a day with a swarm of AI agents.

* I wonder about the long-term scaling of these approaches. The trade-off seems to be extremely fast productivity at the start which falls off a cliff as the product matures and grows. It's like a building that can be constructed in a day up to a few floors but quickly hits an upper limit, as your ability to build _on top of_ a foundational layer of poorly understood garbage collapses.

* Heaven help the ops / infrastructure folks who have to run this garbage and deal with issues at scale.

Btw I don’t reject everything in this post — these tools are indeed powerful and compelling and the trendlines are undeniable.

xyzzy9563|11 months ago

I think a lot of people are going to be surprised at how fast "vibe coding" with agents replaces a lot of traditional software engineering. Especially for non-critical software, but eventually where safety matters too since AI can generate tons of test cases.

LPisGood|11 months ago

There is so much hype around agents but I am still thoroughly unimpressed.

They’re fine at basic tasks, but nothing more.

kolektiv|11 months ago

And without any traditional software engineers, who's going to check that those test cases actually do anything useful and verify the important properties of the system? It doesn't matter how many unit tests your Therac-25 has if none of them test the thing that matters.

mdaniel|11 months ago

I still have to get used to the fact that a (sourcegraph.com) /item may contain (steve-yegge.blogspot.com) content

pluto_modadic|11 months ago

I'm going to guess this is satire unless sourcegraph only has "junior" developers, on "junior" pay.

The article tells me something unfortunate about the wisdom of ever buying software from this person, based on how they write.

DanHulton|11 months ago

As always, citation needed.

(Also, grain of salt required, because this is a blatant marketing post.)

Look, I've been hearing "the models will get better and make these core problems go away" since it became common to talk about "the models" at all. Maybe they will some day! But also, and critically, maybe they won't.

You also have to consider the future where some companies spend an additional $50-100k per developer and they DON'T see any of this supposed increase in performance, if these "trust me, it'll happen this time" promises don't come true. This is the kind of bet that can CRATER companies, so it's not surprising to see some hesitation here, a desire to see if the football will be again yanked away.

Plus, and I believe most damningly, this article appears to be engaging in the classic technocratic failure mode: mistaking social problems for technical ones.

Obviously, yes, developers engage in solving technical problems, but that is not all they do, and at the higher level, that becomes the least of what they do. More and more, a good developer ensures that they are solving the RIGHT problem in the RIGHT WAY. They're consulting with managers, (ideally) users, other teams, a whole host of people to ensure the right thing is built at the right time with the right features and that the right sacrifices are being made. LLMs are classically bad at this.

The author dismissively calls this "getting stuck", and handwaves its importance away, saying that the engineer will be able to unstick the model at first (which, if we're putting armies of "vibe coding" junior engineers in charge of the LLMs, who've not had time enough in their careers to develop this skill, HOW?), and then makes the classic claim "but the models will get better", and predicts the models will eventually be able to do it (which, if this is an intractable problem with LLMs -- and so far the evidence has been leaning this way -- again, HOW?).

Forgive that appalling grammar. I am het up. But note well what I'm doing: I'm asking "should we even be doing this?" Which is something these models a) will have to do well to accomplish what the author insinuates they will, and b) have been persistently terrible at.

I'm going to remain skeptical for now, since it seems that's my one remaining superpower versus these LLMs, and I guess I'm going to need to keep that skill sharp if I want to avoid the breadline in this author's future. =)

sfjailbird|11 months ago

LOL, pretty good, he had me going a few times. I'm sure a surprising number of people will actually take this seriously, which shows how ludicrous the situation is right now.

haburka|11 months ago

One of the funniest things I’ve read in a while. Also full of some truths. I think learning how to use AI will become a core part of being a dev but I seriously doubt they’ll have anywhere near the competency of solving a problem that a junior engineer has. They can certainly write code like one though.

I really recommend this to anyone reading - if you haven’t tried using cursor or copilot, check them out. It makes writing code less tedious.

lcnPylGDnU4H9OF|11 months ago

> We think the flat hiring right now is just companies signalling that they don’t know what to do yet.

The whole article seems to be written disingenuously for the junior developer audience, but this one kinda irked me: the flat hiring is because interest rates are high; it has nothing to do with companies figuring out what to do with vibe coding.

On topic, nothing in this article suggests anything fundamentally useful about vibe coding other than it being an easier way to start for juniors and entry-levels. If you are a junior, go ahead and keep vibe coding but also do your best to understand the code you’re given. I strongly suspect that will (continue to) be something that makes people stand out.

DonHopkins|11 months ago

Is it just me, or has anyone else noticed that:

1) Cursor has been crashing several times an hour for me recently.

2) Cursor seems to ignore .cursorrules files. I'm using the json format that's supposed to let you filter on file name patterns (although how that works for cross-cutting agent stuff I don't know).

3) Cursor is obsessed with making sketchy, iffy defensive code checking for the most recent symptom and trying to guess and shart its way out of it instead of addressing the real problem? And it's extremely hard to talk it out of doing that. I have to keep reminding it and admonishing it to cut it the fuck out, fail instead of mitigate, address the root cause not the symptoms, and stop trying to close the barn door after all the horses have escaped. It's as if it was only trained on Stack Overflow and PHP manual page discussions.

mwkaufma|11 months ago

Nothing smells like bullshit quite like a data-less "graph"