top | item 46823725

They lied to you. Building software is hard

143 points | xiaohanyu | 1 month ago | blog.nordcraft.com

118 comments


gdubs|28 days ago

One of my all-time favorite quotes is from Zen Mind, Beginner's Mind and it goes: “In the beginner’s mind there are many possibilities, but in the expert’s there are few.”

There's such a wide divergence of experience with these tools. Oftentimes people will say that anyone finding incredible value in them must not be very good, or that the tools fall down when you get deep enough into a project.

I think the reality is that to really understand these tools, you need to open your mind to a different way of working than we've all become accustomed to. I say this as someone who's made a lot of software, for a long time now. (Quite successfully too!)

In some ways, while the ladder may be getting pulled up on junior developers, I think they're also poised to really utilize these tools in a way that those of us with older, more rigid ways of thinking about software development might miss.

bsoles|28 days ago

Over the last 25 years of building commercial software (and as a programming enthusiast since I was 15 years old), I came to the conclusion that self-improvement (in the sense of gaining real expertise in a field, building a philosophy of things, and doing the right things) is in direct opposition to creating "value" in the corporate/commercial sense of today.

Using AI/LLMs, you perhaps will create more commercial value for yourself or your employer, but it will not make you a better learner, developer, creator, or person. Going back to the electronic calculator analogy that people like to refer to these days when discussing AI, I also now think that, yes, electronic calculators actually made us worse with being able to use our brains for complex things, which is the thing that I value more than creating profits for some faceless corporation that happens to be my employer at the moment.

phicoh|28 days ago

There have always been young people who can quickly hack something together with whatever new tools are available. That way of working never lasts, but the tools do last.

When tools prove their worth, they get taken into the normal way software is produced. Older people start using them because they see the benefit.

The key thing about software production is that it is a discussion among humans. The computer is there to help. During a review, nobody is going to look at what assembly a compiler produces (with some exceptions of course).

When new tools arrive, we have to be able to blindly trust them to be correct. They have to produce reproducible output. And when they do, the input to those tools can become part of the conversation among humans.

(I'm ignoring editors and IDEs here for the moment, because they don't have much effect on design, they just make coding a bit easier).

In the past, some tools have been introduced, got hyped, and faded into obscurity again. Not all tools are successful, time will tell.

bdangubic|28 days ago

This reminds me of talking to my nephew at Thanksgiving years ago. He was studying for an exam after the holidays, and I was looking at his screen, open to a Google Doc that looked like his study notes, except they were being edited as I watched, by someone else. I asked about it and he said, "We have a single Google Doc where all the students collaborate on the study notes." My mind was blown. I was also using Google Docs, but not in a million years would its utility for what he and his classmates were doing have crossed my mind. Can't wait to see what new blood the "juniors" bring to the table!

AndreasMoeller|27 days ago

At the same time, I see experienced engineers pretend that everything they have learned about software development is no longer true.

Three years ago, the idea of measuring productivity in lines of code would have been ridiculous. After AI, it is the norm.

mnky9800n|28 days ago

I was talking about this with someone today. Before, perhaps there was an exactness you expected, but what really matters is "good enough." If AI-written code takes you to "good enough" according to whatever metric you've set, then what exactly is the problem? A lot of the technical part of the job is taking X data, applying an f(x) transformation, and handing the resulting Y to the next step. So if it passes whatever metric you've set to make sure that going from X to Y handles Z% of the problem space, and doesn't create downstream issues (probably this should be part of your metric), then you have done your job.

And yes, sometimes the job will require writing the code yourself because that level of precision is necessary. But why should we consider that always to be the case? There are probably new programming languages and paradigms we haven't thought of yet that would make this kind of problem solving more efficient, because right now we are not very effective at juggling both the human's and the machine's problem-space context (except for some experts who say they can orchestrate tens of agents all at once).

I think right now is exciting, not a time for hand-wringing. A computer is meant to help you think. Why shouldn't new computational tools bring excitement?

commandlinefan|28 days ago

... and the biggest problem is that the people who _do_ know how hard it is to build software are the ones whose input on the matter is most likely to be discounted as "sour grapes"/"fear of obsolescence".

hn_throwaway_99|28 days ago

I definitely agree with this. Older folks have to deal with the double whammy of being familiar with what they already know, plus there is a good bit of research that learning and absorbing new things just gets harder past mid-40s or so.

That said, I don't think this negates what TFA is trying to say. The difficulty with software has always been around focusing on the details while still keeping the overall system in mind, and that's just a hard thing to do. AI may certainly make some steps go faster, but it doesn't change much about what makes software hard in the first place. For example, even before AI, I would get really frustrated with product managers a lot. Some rare gems were absolutely awesome and worth their weight in gold, but many of them were just never willing to go into the details and minutiae that are really necessary to get the product right. With software engineers, if you don't focus on the details the software often just flat out doesn't work, so it forces you to go to that level (and I find that non-detail-oriented programmers tend to leave the profession pretty quickly). But I've seen more than a few situations where product managers manage to skate by without getting to the depth necessary.

xiaohanyu|1 month ago

"If you are looking for that one trick that lets you get ahead and jumpstart your career, my advice to you is: Don’t choose the path of least resistance. When training a muscle, you only get stronger with resistance. The same is true for learning any new skill. It is when you struggle with a specific problem or concept that you tend to remember."

Pretty nice description.

advisedwang|28 days ago

As with anything, there's also too much of a good thing though.

In my own career, I switched roles to get more time on an area where I felt I needed more growth and practice. It turns out I never got very good at it, and I basically spent 6 years in a role I wasn't great at. It was miserable. My lesson: "if you know you are bad at something, don't make it load-bearing in your life or career".

adam_arthur|28 days ago

LLMs have clearly accelerated development for the most skilled developers.

Particularly when the human acts as the router/architect.

However, I've found Claude Code and Co only really work well for bootstrapping projects.

If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.

It will probably change once the approach to large scale design gets more formalized and structured.

We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.

Yes, AI will one shot crappy static sites. And you can vibe code up to some level of complexity before it falls apart or slows dramatically.
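The "stateless sub-modules that can be implemented in isolation" idea can be made concrete. Here is a minimal, hypothetical sketch in Python (the function name and pricing tiers are invented for illustration): a pure function whose entire contract lives in its signature and docstring, so it can be specified, implemented, and verified without loading the rest of the codebase into an LLM's context.

```python
def apply_discount(subtotal_cents: int, tier: str) -> int:
    """Pure pricing rule: no globals, no I/O, no hidden state.

    Everything an implementer (human or LLM agent) needs is in this
    signature and docstring, so the function can be written, reviewed,
    and tested without reading any other file.
    """
    rates = {"basic": 0, "plus": 10, "pro": 20}  # percent off; illustrative values
    if tier not in rates:
        raise ValueError(f"unknown tier: {tier}")
    return subtotal_cents * (100 - rates[tier]) // 100

# Because the module is stateless, verifying it is a handful of assertions:
assert apply_discount(1000, "pro") == 800
assert apply_discount(999, "plus") == 899
```

The design point is that the unit of work handed to one LLM invocation is bounded by the interface, not by the size of the repository.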

lowbloodsugar|28 days ago

>If you largely accept their edits unchanged, your codebase will accrue massive technical debt over time and ultimately slow you down vs semi-automatic LLM use.

Worse, as it's planning the next change, it's reading all the bad code that it wrote before, but now that bad code is blessed input. It writes more of it, and instructions to use a better approach are outweighed by the "evidence".

Also, it's not tech debt: https://news.ycombinator.com/item?id=27990979#28010192

Sohcahtoa82|28 days ago

Agreed.

What I've found is that AI can be alright at creating a Proof of Concept for an app idea, and it's great as a Super Auto-complete, but anything with a modicum of complexity, it simply can't handle.

When your code is hundreds of thousands of lines, asking an agent to fix a bug or implement a feature based on a description of the behavior just doesn't work. The AI doesn't work on call graphs, it basically just greps for strings it thinks might be relevant to find things. If you know exactly where the bug lies, it can usually find it with context given to it, but at that point, you're just as good fixing the bug yourself rather than having the AI do it.

The problem is that you have non-coders creating a PoC, then screaming from the rooftops how amazing AI is and showing off what it's done, but then they go quiet as the realization sets in that they can't get the AI to flesh it out into a viable product. Alternatively, they DO create a product that people start paying to use, and then they get hacked because the code is horribly insecure and hard-codes API keys.

athenot|28 days ago

> We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation.

Containment of state happens to benefit human developers too, and keeps complexity from exploding.

AndreasMoeller|27 days ago

The most interesting thing for me is that I am sure it does.

I have been coding for 20+ years and I have used AI agents for coding a lot, especially for the last month and a half. I can't say for sure that they make me faster. They definitely do for some tasks, but overall? I can solve some tasks really quickly, but at the same time my understanding of the code is not as good as it was before. I am much less confident that it is correct.

LLMs clearly make junior and mid-level engineers faster, but it is much harder to say for seniors.

krainboltgreene|28 days ago

> LLMs have clearly accelerated development for the most skilled developers.

Have they so clearly? What's the evidence?

themafia|28 days ago

> accrue massive technical debt

The primary difference between a programmer and an engineer.

sjdixjjxs|28 days ago

> We ultimately need optimized DSLs and aggressive use of stateless sub-modules/abstractions that can be implemented in isolation to minimize the amount of context required for any one LLM invocation

Wait till you find out about programming languages and libraries!

> It will probably change once the approach to large scale design gets more formalized and structured

This idea has played out many times over the course of programming history. Unfortunately, reality doesn’t mesh with our attempts to generalize.

citelao|28 days ago

Perhaps this is a bit OT, since the article focuses more on self-development ("When training a muscle, you only get stronger with resistance"), but I wonder about the subtitle:

> Every week there seems to be a new tool that promises to let anyone build applications 10x faster. The promise is always the same and so is the outcome.

Is the second sentence true? Regardless of AI, I think that programming (game development, web development, maybe app development) is easier than ever? Compare modern languages like Go & Rust to C & C++, simply for their ease-of-compilation and execution. Compare modern C# to early C#, or modern Java to early Java, even.

I'd like to think that our tools have made things easier, even if our software has gotten commensurately more complicated. If they haven't, what's missing? How can we build better tools for ourselves?

dijit|28 days ago

I'm not sure.

Think of the game hits from the '90s. A room full of people made games that shaped a generation. Maybe it was orders of magnitude harder then, but today it takes multiple orders of magnitude more people to make them.

Same is true for websites. Sure, the websites were dingy with poor UX and oodles of bugs... but the size of the team required to make them was absolutely tiny compared to today.

Things are simultaneously the best they've ever been, and the worst they've ever been, it's a weird situation to be in for sure.

But truthfully, orders of magnitude more powerful hardware was the real unlock.

Why are Slack and Discord popular? Because it's possible to use multiple gigabytes of RAM for a chat client.

25 years ago? Multiple gigabytes of RAM put your machine firmly in the "I have unlimited money and am probably a server doing millions of things" class.

kerblang|28 days ago

Short answer: no, things have not gotten easier. The toolchain is insanely huge. A computer science graduate is woefully underprepared for the tool tsunami that will completely swamp their career. Many of these tools are worse than no tool at all.

Causes: Bubble economics, perverse incentives, lack of objectivity, and more.

The good news is that huge competitive advantages are available to those who refuse to accept norms without careful evaluation.

vbezhenar|28 days ago

Modern Java is vastly more complicated than early Java, so I'm not sure I follow your reasoning. Programming nowadays is absurdly complicated. I have 20 years of experience and I can't imagine how a new developer could learn it all.

I don't really know if AI makes programming easier or harder. On one hand, you can explore any topic with AI, which is a super powerful ability when it comes to learning. On the other hand, the temptation to offload your work to AI is big, and if you do that, you'll learn nothing. So it comes down to the type of person, I guess. Some people will use AI to learn and some people will use AI to avoid learning; both behaviours are empowered.

I have a simple and useless answer for how to solve that: throw it all out. Start from scratch. Start with a simple CPU. Start with a simple OS. Start with simple protocols. Do not write frameworks. Make the number of layers between your code and the hardware as small as possible, so it's actually possible to understand it all. Right now the number of abstraction layers is too big. Of course nobody's going to do that; people will add more abstraction layers and it'll work, it always works. But that sucks. The software stack was much simpler 20-30 years ago. We didn't even have source control (I was the young developer who introduced Subversion into our company), but we still delivered useful software.

cess11|28 days ago

I think it is about as hard as it ever was. The tricky part is learning to think through problems in a certain way, when you have that it doesn't matter much whether you're reading hexdumps and slinging low-level code on a 68k chip or clicking about in Godot and watching videos about clicking.

Crapping out code that does the thing was never the hard part; the hard part is reading the crap someone did and changing it. There are tradeoffs here: perhaps you invest in modeling up front and use more or less formal methods, or you're just great at executing code over and over very fast with small adjustments and interpreting the result. Either way you'll eventually produce something robust that someone else can change reasonably fast when needed.

The additions to Java and C# are a lot about functional programming concepts, and we've had those since forever way back in the sixties. Map/reduce/filter are old concepts, and every loop is just recursion with some degree of veiling, it's not a big thing whether you piece it together in assembly or Scheme, typing it out isn't where you'll spend most of your time. That'll be reading it once it's no longer yesterday that you wrote it.

If I were to invent a 10x-meganinja-dev-superpower-tool, it would be focused on static and execution analysis, with strong extensibility in a simple DSL or programming language, and decent visualisation APIs. It would not be 'type here to spin the wheels and see what code drops out'; that part is solved many times over already, in Wordpress, JAXB-oriented CRMs and so on. The ability to confidently implement change in a large, complex system is enabled by deterministic, immediate analysis and visualisation.

Then there are the soft skills. While you're doing it you need to keep bosses and "stakeholders" happy and make sure they do not start worrying about the things you do. So you need to communicate reliably and clearly, in a language they understand, which is commonly pictures with simple words they use a lot every day and little arrows that bring the message together. Whether you use this or that mainstream programming language will not matter at all in this.

pmichaud|28 days ago

I think as many other people who replied to you have said, it's a mixed bag. It's better in some sense, with abstractions and frameworks that sand down sharp edges, and libraries that can do everything. But it's also crushingly more complex. Back in the day you had to know and care about memory allocation and ASM, but all the knowledge you needed was in a manual or two that you owned and could actually know the contents of.

HumblyTossed|28 days ago

Having learned programming in the 80s (as a teen), I would say it was much easier back then. Programmers have made things vastly more complicated these days.

drdec|28 days ago

Maybe the outcome they had in mind was "it helps, but nowhere near 10x"?

Also, I'm not sure anyone was making 10x claims about the tools you cite.

commandlinefan|28 days ago

> programming is easier than ever

Or does it just seem that way because you've had a whole lifetime to digest it one little bit at a time so that it all seems intuitive now? If "easy to understand and get started with" were the bar for programming capability, we'd have stopped with COBOL.

matthewkayin|28 days ago

> Compare modern languages like Go & Rust to C & C++, simply for their ease-of-compilation and execution.

Except that at least for game development, C and C++ are still the go-to tools?

wrs|28 days ago

You missed the word "anyone". Of course tools for programmers have seen huge improvements. The "promise" referred to here is that you don't need to learn programming skills to be an effective programmer.

stronglikedan|28 days ago

Building software is actually so easy that my 8 year old niece can do it. Shipping software is what's hard.

giancarlostoro|28 days ago

Shipping is easy; shipping stable, functional (let's lump in scalable) software, on the other hand...

catoc|28 days ago

I understand the “8 year old niece” is hyperbole, but really? Everyone can build apps?

“Build me a recipe app”, sure.

Building anything substantial has consistently failed for me unless you take Claude or Codex by the hand and guide them through it step by step.

camnora|28 days ago

Not to mention selling it

enos_feedler|28 days ago

But who are you shipping it to if everyone is building it?

mlsu|28 days ago

Fred Brooks, from "No Silver Bullet" (1986)

> All software construction involves essential tasks, the fashioning of the complex conceptual structures that compose the abstract software entity, and accidental tasks, the representation of these abstract entities in programming languages and the mapping of these onto machine languages within space and speed constraints. Most of the big past gains in software productivity have come from removing artificial barriers that have made the accidental tasks inordinately hard, such as severe hardware constraints, awkward programming languages, lack of machine time. How much of what software engineers now do is still devoted to the accidental, as opposed to the essential? Unless it is more than 9/10 of all effort, shrinking all the accidental activities to zero time will not give an order of magnitude improvement.

AI, the silver bullet. We just never learn, do we?

raincole|28 days ago

I think software was indeed 9/10 accidental activities before AI. Probably still mostly accidental activities with the current LLM.

The essence: query all the users within a certain area and do it as fast as possible

The accident: spending an hour surveying spatial-tree libraries, another hour debating whether to make our own, one more hour reading the algorithm, a few hours coding it, and a few days testing and debugging it

Many people seem to believe implementing the algorithm is "the essence" of software development so they think the essence is the majority. I strongly disagree. Knowing and writing the specific algorithm is purely accidental in my opinion.
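To make the essence/accident split above concrete, here is a hedged sketch in Python of the "query all the users within a certain area" task, using a naive grid-bucket index instead of a real spatial-tree library (class and names are illustrative, not any particular library's API). The essence is the two-line query contract; everything else is the accidental machinery:

```python
from collections import defaultdict

class GridIndex:
    """Toy spatial index: hash users into fixed-size grid cells so a
    range query only scans nearby cells instead of every user."""

    def __init__(self, cell_size=10.0):
        self.cell_size = cell_size
        self.cells = defaultdict(list)  # (cx, cy) -> [(user_id, x, y)]

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def add(self, user_id, x, y):
        self.cells[self._cell(x, y)].append((user_id, x, y))

    def query(self, x0, y0, x1, y1):
        """The essence: return ids of users inside [x0, x1] x [y0, y1]."""
        cx0, cy0 = self._cell(x0, y0)
        cx1, cy1 = self._cell(x1, y1)
        hits = []
        for cx in range(cx0, cx1 + 1):
            for cy in range(cy0, cy1 + 1):
                for user_id, x, y in self.cells.get((cx, cy), ()):
                    if x0 <= x <= x1 and y0 <= y <= y1:
                        hits.append(user_id)
        return hits

idx = GridIndex()
idx.add("alice", 3.0, 4.0)
idx.add("bob", 55.0, 60.0)
print(idx.query(0, 0, 10, 10))  # -> ['alice']
```

Everything below the `query` docstring is exactly the kind of accidental work the comment describes: a competent tool could generate or replace it without touching the essential requirement.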

idle_zealot|28 days ago

There are mixed views here. Some are making the claim relevant to the Silver Bullet observation, that LLMs are cutting down time spent on non-essential work. But the view that's really driving the hype is that the machine can do the essential work: design the system for you, implement it, explore the possibility space, make judgments about the tradeoffs, and make decisions.

Now, can it actually do those things? Not in my estimation. But from the perspective of a less experienced developer it can sure look like it does. It is, after all, primarily a plausibility engine.

I'm all for investing in integrating these generative tools into workflows, but as of yet they should not be given agency, or even the aesthetic appearance of agency. It's too tempting to the human brain to shut down when it looks like someone or something else is driving and you're just navigating and correcting.

And eventually, with a few more breakthroughs in architecture maybe this tech actually will make digital people who can do all the programming work, and we can all retire (if we're still alive). Until then, we need to defend against sleepwalking into a future run by dumb plausibility-generators being used as accountability sinks.

didgetmaster|28 days ago

Anything (software or physical) that is fast, easy, and cheap to build will never be a financial success for a single company. The minute you get some market traction, your competitors will come in and take away all your customers.

charcircuit|28 days ago

If you were given a copy of the entire software stack that runs YouTube I would bet $1000000 you can't take all of YouTube's customers. Businesses are more than just the software.

prewett|28 days ago

I wonder if 3D printing is a good analogy. The promise was "you can print anything you want!" From my observation, the reality is that you can 3D print cheap plastic crap that looks like voxel rendering made manifest. This turns out to be handy in a lot of situations, like making custom jigs for something, but you're not going to be 3D printing custom jewelry, or custom furniture. Sure, you hear stories about how SpaceX is 3D printing rocket engines, but you can't afford a machine like that, and even if you could, you won't be printing custom jewelry with it.

So, sure, some people are going to be using AI to create professional software, but they aren't going to tell you about all the engines that blew up along the way, and who knows which ones are going to blow up in the future. But custom utility software might get a whole lot more common.

dfabulich|28 days ago

This article includes a graph with a negative slope, claiming that AI tools are useful for beginners, but less and less useful the more coding expertise you develop.

That doesn't match my experience. I think AI tools have their own skill curve, independent of the skill curve of "reading/writing good code." If you figure out how to use the AI tools well, you'll get even more value out of them with expertise.

Use AI to solve problems you know how to solve, not problems that are beyond your understanding. (In that case, use the AI to increase your understanding instead.)

Use the very newest/best LLM models. Make the AI use automated tests (preferring languages with strict type checks). Give it access to logs. Manage context tokens effectively (they all get dumber the more tokens in context). Write the right stuff and not the wrong stuff in AGENTS.md.

PaulRobinson|28 days ago

That sounds exhausting.

I'd rather spend my time thinking about the problem and solving it than thinking about how to get some software to stochastically select language that appears as if it is thinking about the problem, to then implement a solution I'm going to have to check carefully.

Much of the LLM hype cycle breaks down into "anyone can create software now", which TFA makes a convincing argument for being a lie, and "experts are now going to be so much more productive", which TFA - and several studies posted here in recent months - show is not actually the case.

Your walk-through is the reason why. You've not got magic for free; you've got something kinda cool that needs operational management and constant verification.

countWSS|28 days ago

There is a point in there. Long-range analysis and debugging without AI is much harder; AI spots lots of non-obvious stuff very fast. If we consider "spotting non-obvious flaws" a skill, it will atrophy as beginners learn to use AI to scan code for flaws. It is effective, but it doesn't teach anything. Reading long blocks of code and mentally simulating them is an incredibly valuable skill, and it will find stuff AI misses (anything too complex, e.g. nested or recursive control flow, async and coroutines/threads interacting, etc.). AI goes for the obvious stuff first and has to be manually pointed at the rest: "identify flaws, focusing on X".

zkmon|28 days ago

> With no-code tools you often reach a hard limit where the tool simply does not make sense to use anymore.

No-code is the same trend that has abstracted all the generic stuff into infrastructure layers, letting developers focus on Lambda functions while everything in the lower levels is config-driven. This has been happening all along, pushing the developer up to easier, higher layers and absorbing the complexity and algorithmic work into config-driven layers.

Runtime cost of a Lambda function might far exceed that of a fully hand-coded application hosted on your local server. But there could be other factors to consider.

Same with AI. You get a jump-start with full speed, and then you can take the wheel.

etamponi|28 days ago

The point of the article is that the jump start AI gives you is not the same as the one that well-thought-out frameworks give you. What AI writes falls apart and leaves you with the ruins.

jackinthehat|28 days ago

Years ago I watched a very senior engineer refuse to use an IDE debugger because "real understanding means doing it in your head." He was brilliant - and he also spent two days chasing a bug a junior fixed in 10 minutes by setting a breakpoint. The junior didn't understand more; he just had a better tool for that moment.

Tools don’t make you wiser or lazier by default — they amplify whatever habits you already have. If you’re using them to avoid thinking, that shows. If you’re using them to explore faster, that shows too.

Beginner’s mind isn’t about ignorance; it’s about being willing to try leverage where it exists.

Tiberium|28 days ago

Am I missing something or is the actual point of the article just "don't start learning programming by using AI"? The title seems very different from the content.

worik|28 days ago

> They make the simple parts of software development simpler, but the complex parts can often become more difficult.

This is so frustratingly common.

coffeefirst|28 days ago

One more thing…

The newbie prototype was never all that hard. You could, in my day, have a lot of fun in that first week with Dreamweaver, Visual Basic, or cargo-culting HTML.

There’s nothing wrong with this.

But to get much further than that ceiling you probably needed to crack a book.

raincole|28 days ago

I only now realize how spot-on the muscle-training analogy is. In the modern world, very few people are hired for their muscles alone; actually building muscle costs money for the absolute majority.

This is how I see hand-building software going.

yowlingcat|28 days ago

Something that's been on my mind recently: what if gen-AI coding tools are ultimately attention casinos in the same way social media is? You burn through tons of tokens and you pay per token; it feels productive and engaging, but ultimately the more you try and fail, the more money the vendor makes. Their revealed (though perhaps not stated) economic goal may be to keep you in the "goldilocks zone" of making enough progress not to give up, but not so much progress that you one-shot the end state without issues.

I'm not saying that they can actually do that per se; switching costs are so low that if you are doing worse than an existing competitor, you'd lose that volume. Nor am I saying they are deliberately bilking folks; I think it would be hard to do that without folks cottoning on.

But, I did see an interesting thread on Twitter that had me pondering [1]. Basically, Claude Code experimented with RAG approaches over the simple iterative grep that they now use. The RAG approach was brittle and hard to get right in their words, and just brute forcing it with grep was easier to use effectively. But Cursor took the other approach to make semantic searching work for them, which made me wonder about the intrinsic token economics for both firms. Cursor is incentivized to minimize token usage to increase spread from their fixed seat pricing. But for Claude, iterative grep bloating token usage doesn't harm them and in fact increases gross tokens purchased, so there is no incentive to find a better approach.
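For readers unfamiliar with the "iterative grep" style of code search mentioned here, a rough sketch of one search round in Python may help (the function and its details are my assumption of the general technique, not Claude Code's actual implementation). The agent reads the hits, picks new keywords, and calls it again; each round streams more text into the context window, which is the token-economics point:

```python
import re
from pathlib import Path

def grep_repo(root, pattern, max_hits=20):
    """One round of iterative grep: scan source files for a regex and
    return (path, line_no, line) hits. An agent inspects the hits, refines
    the pattern, and searches again -- every round adds tokens to context."""
    rx = re.compile(pattern)
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        for n, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if rx.search(line):
                hits.append((str(path), n, line.strip()))
                if len(hits) >= max_hits:
                    return hits
    return hits
```

A RAG-style index would instead embed the files once and answer each question from the index, trading brittleness (stale or fuzzy retrieval) for far fewer tokens per question, which is the incentive asymmetry the comment describes.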

I am sure there are many instances of this out there, but it does make me inclined to wonder if it will be economic incentives rather than technical limitations that eventually put an upper limit on closed weight LLM vendors like OpenAI and Claude. Too early to tell for now, IMO.

[1] https://x.com/antoine_chaffin/status/2018069651532787936

Throaway1985232|28 days ago

Well, the first time I got really excited about an LLM was when it told me, "Yes, if you give me your game ideas and we iterate together, I can handle 100% of the coding." Lies, pure lies.

ElijahLynn|28 days ago

I read a quote from somebody in the industry recently that stuck. I don't remember who it was.

"Writing software is easy, changing it is hard."

ryandvm|28 days ago

Absolutely true. Especially so with poorly abstracted software design.

This is why so many new teams' first order of business is invariably a suggestion to "rewrite everything".

They're not going to do a better job or get a better product, it's just the only way they're going to get a software stack that does what they want.

anonymous344|28 days ago

True. I've built a simple app that solved the annoying problem, common in that app space, of entering a time and date. After years and years, people still pay for it, which I'm very grateful for. I've even seen many M$ companies build their products yet lack the simple sense to ease the user experience with a non-default date and time selector.

vineethy|28 days ago

Strongly disagree with this article. I think using the tools can actually directly move a junior engineer closer to a senior engineer. Telling junior engineers that they have to get better at typing out code in order to be better engineers misses what actually makes someone a better engineer.

It's worth actually being specific about what differentiates a junior engineer from a senior engineer. There are two things: communication and architecture. The combination of these two makes you a better problem solver. Talking to other people helps you figure out your blind spots and forces you to reduce complex ideas down to their most essential parts. The loop of solving a problem and then seeing how well the solution worked gives you an instinct for what works and what doesn't for any given problem. So how do agents make you better at these two things?

If you are better at explaining what you want, you can get the agents to do what you want a lot better. So you'd end up being more productive. I've seen junior developers that were pretty good problem solvers improve their ability to communicate technical ideas after using agents.

Senior engineers develop instincts for issues down the road. So when they begin any project, they'll take this into account and think it through as they work. They can get the agents to build toward a clean architecture from the get-go, such that issues are easily traceable and debuggable. Junior developers get better at architecture by using agents because they can quickly churn through candidate solutions. This helps them more rapidly learn the strengths and weaknesses of different architectures.

Thanemate|28 days ago

People don't develop the ability to solve algebraic equations by watching a professor solve them on the whiteboard. That's just the introduction to the methodology. The way people develop problem-solving is by solving problems themselves.

This is why everyone's thirsty for senior/staff engineers who are AI-powered right now: their entire work experience was the typical SWE experience.

I cannot wait for the industry to hit a highly skilled SWE drought in the next 5 years, so I can swoop in and become the AI-powered engineer who saves the day, because other junior-to-mid SWEs outsourced their problem solving way too early, either by falling for the "don't be left behind" narrative (which is absurd: what about people who get into CS 6 years from now? Do they miss some metaphorical train?) or because their manager forced them to adopt the tools.

lowbloodsugar|28 days ago

What comes to mind is Java vs. assembly. Claude is just a really, really high-level language compiler. I work with senior Java devs who have never written assembly.

On the learning front, I spent the weekend asking Claude questions about Rust, then getting it to write code that achieved the result I wanted. I also now have a much better understanding of the different options because I got three different working examples and got to tinker with them. It's a lot faster to learn how an engine works when you have a working engine on a dyno than when you have no engine. Claude built me a diesel, a gasoline, and an electric engine, and then I took them apart.

sgarland|28 days ago

> It's worth actually being specific about what differentiates a junior engineer from a senior engineer. There's two things: communication and architecture.

Uhhh… also skills and abilities? You won’t develop either of those by repeatedly asking an AI to solve problems for you.

Bishonen88|28 days ago

https://www.youtube.com/watch?v=7lzx9ft7uMw

^ An everything app for personal use that I'm thinking about making public in some way

~50k LOC across ~400 files. Docker, Postgres, React + Fastify. I'd say between 15 and 20 hours of vibe coding.

- Tasks, Goals, Habits

- Calendar showing all of the above with two way google sync

- Household sharing of markdown notes, goals and more

- Financial projections, spending, earning, recurring transactions and more

- Meal tracking with pics, last eaten, star rating and more

- Gantt chart for goals

- Dashboard for at a glance view

- PWA for android with layout optimizations

- Dark mode

... and more

Could I have done it in the last 5 years? Yes, though it would've taken 3-4 months if not more. Now we could talk 24/7 about whether it's clean code, super maintainable, etc. etc. Code written by hand wouldn't be either, if it were just me doing a hobby project.

Shipping is rather straightforward as well, thanks to LLMs. They hold your hand most of the way. Being a techie makes this much, much easier...

I think developers are cooked one way or another. Won't take long now. The same question asked a year ago would have had a dramatically different answer: AI was helpful to some extent but couldn't code up basic things.

13415|28 days ago

I have to strongly disagree with that. I was probably never as productive as when I used REALbasic, which was a classical RAD tool. I sold the software made with it successfully for quite a while.

As most people here probably know, it's now called Xojo, and in my opinion it's both somewhat outdated and expensive. So I'm not recommending it, but credit where it's due, and it certainly was due for early versions of REALbasic when it was still affordable shareware.

The problem with all RAD tools seems to be that they eventually morph into expensive corporate tools no matter what their origins were. I don't know any cross-platform exception (I don't count Purebasic as RAD and it's also not structured).

As for AI, it seems to be just the same. The right AI tool accelerates the easy parts so you have more time for the hard parts. Another thing that bothers me a lot is when alleged "professionals" argue against everyday computing for everyone. They're accelerating the death of general computing platforms, and in the end no one will benefit from that.

sumanep|28 days ago

Who lied to me?

1970-01-01|1 month ago

We have hard evidence of it becoming easier every damn day. AI is taking these jobs. The models aren't perfect, but the speed tradeoff is so massive that you really can't say it's "hard" to build anything anymore. Nobody is lying.

contagiousflow|28 days ago

What is the hard evidence?

Edit: What I mean by this is that there may be some circumstantial evidence (less hiring of juniors, more AI companies getting VC funding). We currently have no _hard_ evidence that programming has seen a substantial speed increase or deskilling from LLMs. Any actual __science__ on this has yet to show it. But please, if you have _hard_ evidence on this topic, I would love to see it.

xmprt|28 days ago

If you think of building software as just writing the code, then sure, AI makes things a lot easier. But if software engineering also includes security, setting up and maintaining infrastructure, choosing the right tradeoffs, understanding how to deal with evolving requirements without ballooning code complexity, etc., then AI struggles with that at the moment.

themafia|28 days ago

> but the speed tradeoff

If you only care about a single metric you can convince yourself to make all kinds of bad decisions.

> Nobody is lying.

Nobody is being honest either. That happens all the time.

tom2948329494|28 days ago

> The problem is that while these tools can help you build a simple prototype incredibly quickly, when it comes to building functional applications they are much more limited

As someone with 0 (zero) Swift skills who has built a very well-functioning iOS app purely with AI, I disagree.

AI made me infinitely faster, because without it I wouldn't even have tried to build it.

And yes, I know the limits and security concerns and understand enough to be effective with AI.

You can build functioning applications just fine.

It's complexity and novel problems where AI _might_ struggle, but not all software is complex or novel.

anonymous344|28 days ago

Do you make money with it, like a monthly subscription? Because that's my Achilles heel: how to sync the MySQL backend with Apple's payment system so it knows when a user has ordered or cancelled.
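The usual mechanism for that sync is App Store Server Notifications: Apple POSTs a JSON body whose `signedPayload` field is a JWS, and your backend decodes it and updates the subscription row. The sketch below only decodes the claims segment; in production you must verify the JWS signature against Apple's certificate chain, and in the real payload the transaction info is itself a nested signed payload, so the flat `data.originalTransactionId` shape here, the table name, and the SQL are simplifying assumptions.

```python
import base64
import json

def decode_jws_claims(jws: str) -> dict:
    """Decode the (unverified) claims segment of a JWS.
    WARNING: real handlers must verify the signature first."""
    payload_b64 = jws.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def subscription_update(claims: dict) -> tuple:
    """Map a notification to the parameters of a hypothetical MySQL update,
    e.g. UPDATE subs SET active=%s WHERE original_txn_id=%s."""
    kind = claims["notificationType"]
    txn_id = claims["data"]["originalTransactionId"]  # simplified claim shape
    active = kind not in {"EXPIRED", "REFUND", "GRACE_PERIOD_EXPIRED"}
    return (txn_id, active)

# Simulate an incoming notification for an expired subscription.
fake_claims = {"notificationType": "EXPIRED",
               "data": {"originalTransactionId": "123"}}
seg = base64.urlsafe_b64encode(
    json.dumps(fake_claims).encode()).decode().rstrip("=")
claims = decode_jws_claims("header." + seg + ".sig")
row = subscription_update(claims)
```

The key point is that Apple pushes the state changes to you, so the MySQL side only needs an idempotent handler keyed on the original transaction ID rather than any polling of the payment system.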

threethirtytwo|28 days ago

Software is the hardest thing on planet earth. That's why there's this concept of bootcamps. No other profession has this concept of "bootcamps".

Building a plane is easier than building software. That's why they don't have bootcamps for building planes or becoming a rocket engineer. Building rockets or planes as an engineer is a breeze so there's no point in making a bootcamp.

That's the awesome thing about being a swe, it's so hard that it's beyond getting a university degree, beyond requiring higher math to learn. Basically the only way to digest the concept of software is to look at these "tutorials" on the internet or have AI vibe code the whole thing (which shows how incredibly hard it is, just ask chatGPT).

My friend became a rocket engineer and he had to learn calculus, physics and all that easy stuff which university just transferred into his brain in a snap. He didn't have to go through an internet tutorial or bootcamp.