This is no different than carpentry. Yes, all furniture can now be built by machines. Some people still choose to build it by hand. Does that make them less productive? Yes. Will they ever carve furniture by hand for a business? Probably not. Can they still enjoy the act of working with the wood? Yes.
If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
I’ve heard this metaphor before and I don’t think it works well.
For one, a power tool like a bandsaw is a centaur technology. I, the human, am the top half of the centaur. The tool drives around doing what I tell it to do and helping me to do the task faster (or at all in some cases).
A GenAI tool is a reverse-centaur technology. The algorithm does almost all of the work. I’m the bottom half of the centaur helping the machine drive around and deliver the code to production faster.
So while I may choose to use hand tools in carpentry, I don’t feel bad using power tools. I don’t feel like the boss is hot to replace me with power tools. Or to lay off half my team because we have power tools now.
I think this comparison isn’t quite correct. The downside with carpentry is that you only ever produce one instance of the thing you’re making. Factory woodwork can churn out multiple copies of the same piece in a way hand carpentry never can. There is a hard limit on output, and output has a direct relationship to how much you sell.
Code isn’t really like that. Hand-written code scales just like AI-written code does. While some projects are limited by how fast code can be written, it’s much more often things like gathering requirements that limit progress. And software is rarely a repeated, one-and-done thing. You iterate on the existing product. That never happens with furniture.
> If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
If you can't code by hand professionally anymore, what are you being paid to do? Bring the specs to the LLMs? Deal with the customers so the LLMs don't have to?
I'm still not sure about the productivity. Last time I asked an LLM to generate a lib for me, it did it in a few seconds, but the result took me the whole day to review and correct. About the same time it would have taken me to write it from scratch.
The reason this analogy falls down is that tools typically do one thing, do it extremely well, and are extremely reliable. When I use a table saw, I know that it's going to cut this board into two pieces, exactly in this spot, and it'll do that exactly the same way every single time I use it.
You cannot tell AI to do just one thing, have it do it extremely well, or do it reliably.
And while there's a lot of opinions wrapped up in it all, it is very debatable whether AI is even solving a problem that exists. Was coding ever really the bottleneck?
And while the hype is huge and adoption is skyrocketing, there hasn't been a shred of evidence that it actually is increasing productivity or quality. In fact, study after study continues to show that speed and quality actually go down with AI.
Some people like to spin their own wool, weave their own cloth, sew their own clothes.
A few even make a good living by selling their artisanal creations.
Good for them!
It's great when people can earn a living doing what they love.
But wool spinning and cloth weaving are automated and apparel is mass produced.
There will always be some skilled artisans who do it by hand, but the vast majority of decent jobs in textile production are in design, managing machines and factories, sales and distribution.
Like, do you even know how furniture is designed and built? Do you know how software is designed and built? Where is this comment even coming from? And people are agreeing with this?
A friend of mine reposted someone saying that "AI will soon be improving itself with no human intervention!!" And I tried asking my friend if he could imagine how an LLM could design and manufacture a chip, and then a computer to use that chip, and then a data center to house thousands of those computers, and he had no response.
People have no perspective but are making bold assertion after bold assertion.
If this doesn't signal a bubble, I don't know what does.
I'm tired of the carpentry analogy. It feels like a thought-stopping cliché, because it's used in every thread where this topic comes up. It misses the fact that coding is fundamentally different, and that there are still distinct advantages to writing at least some code by hand, both for the individual and the company.
Maybe a better question is: is natural language to code what a high-level language is to hand-written assembly? Brooks claims the "essential complexity" lies in the specification: if a spec is precise enough to be executable, it’s just code by another name. But is the gap actually that large today? When I ask for a "centered 3x3 Tailwind grid", the patterns are so standardized that the ambiguity nearly vanishes. It’s like asking for a Java 8 main method. The implementation is so predictable that the intent and the code are one and the same. Or, in statistical jargon: most coding has a strong prior that leads to a predictable posterior.
The key question now is: how far can AI go? It started with simple auto-completion, but as AI absorbs more procedural know-how, it becomes capable of generating increasingly larger chunks of maintainable code. Perhaps we are reaching a point where established patterns are so well-understood that AI can bridge the gap between a vague intent and a working system, effectively automating away what Brooks once considered essential complexity.
In the long run, this probably makes experts more valuable, but it’ll gut the demand for standard engineers. So much of our market value is currently tied to how hard it is to transfer expertise among humans. AI renders that bottleneck moot. Once the know-how is commoditized, the only thing left is the what and why.
I like programming by hand too. Like many of us here, I've been doing this for decades. I'm still proud of the work I produced and the effort I put in. For me it's a highly rewarding and enjoyable activity, just like studying mathematics.
Nevertheless, the main motivator for me has been always the final outcome - a product or tool that other people use. Using AI helps me to move much faster and frees up a lot of time to focus on the core which is building the best possible thing I can build.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
Opus 4.5 just came out around 3 months ago. We are still very early in this game. Creating things this year already makes me feel like I'm in the Enchanted Pencil (*) cartoon, in which the boy draws an object with a magic pencil and it becomes reality within seconds. With the collective effort of everyone involved in building the AI tools and the incentives aligned (as they are right now), the progress will continue to be very rapid. You can still code by hand, but it will be very hard to compete in the market without the use of AI.
There's going to be minimal "junior" jobs where you're mostly implementing - I guess roughly equivalent to working wood by hand - but there's still going to be jobs resembling senior level FAANG jobs for the foreseeable future.
Someone's going to have to do the work: babysit the algorithm, know how to verify that it actually works and does what it's supposed to do, know how to tell whether the people who asked for it actually knew what they were asking for, etc.
Will pay go down? Who knows. It's easy to imagine a world in which this creates MORE demand for seniors, even if there's less demand for "all SWEs" because there's almost zero demand for new juniors.
And at least for some time, you're going to need non-trivial babysitting to get anything non-trivial to "just work".
At the scale of a FAANG codebase, AI is currently not that helpful.
Sure, Gemini might have a million-token context, but the larger the context, the worse the performance.
This is a hard problem to solve, one that has seen minimal progress in, what, 3 years?
If there's a MAJOR breakthrough on output performance wrt context size - then things could change quickly.
The LLMs are currently insanely good at implementing non-novel things in small context windows - mainly because their training sets are big enough that it's essentially a search problem.
But there's a lot more engineering jobs than people think that AREN'T primarily doing this.
People will only build it by hand if there is a market for it. Otherwise it'll be mostly hobby software built for learning and entertainment (open source). Businesses don't mind factory software. And although the cost of developing software is going down and will be almost free, at that point the price is really one of taste: the best-designed, easiest-to-use interface. My guess is there will be some people, even in the future, who would prefer custom handmade artisanal software written entirely by hand. Those will probably be artisanal software collectors. Maybe in the future there will be art galleries that auction this software as art pieces to be demonstrated in museums. These will be rare events. Most commercial software will be factory-made.
If I'm using the right tools for the job, I don't feel like the LLM helps outside of minor autofilling or writing quick one-off scripts. I do use LLMs heavily at work, but that's because half the time I'm forced to use cumbersome tooling like Java with some boilerplatey framework, or to write web backends in C++ for no performance reason.
Coding can be a joy and art-like. I (speaking for myself) do feel incredibly lonely when doing it alone for long stretches. It’s closer to doing graduate mathematics, especially on software that fewer and fewer know how to do well. It is also impossible to find people who would pay for _only_ beautiful code.
I agree with this analogy, as someone who professionally codes and someone who pulls out the power tools to build things around my house but uses hand tools for furniture and chairs.
No job site would tolerate someone bringing a hand saw to cut rafters when you could use a circular saw; the outcome is what matters. In the same vein, if you’re too sloppy cutting with the circular saw, you’re going to get kicked off the site too. Just keep in mind a home made from dimensional lumber is at the bottom of the precision scale. The software equivalent of a rapper’s website announcing a new album.
There are places where precision matters: building a nuclear power plant, or software that runs an airplane or an insulin pump. There will still be a place for the real craftsman.
> This is no different than carpentry. Yes, all furniture can now be built by machines. Some people still choose to build it by hand. Does that make them less productive? Yes.
I take issue even with this part.
First of all, all furniture definitely can't be built by machines, and no major piece of furniture is produced by machines end to end. Even assembly still requires human effort, let alone design (and let alone choosing, configuring, and running the machines responsible for the automatable parts). So really, a given piece of furniture may range from 1% machine-built (just the screws) to 90%, but it's never 100%, and rarely that close to the top of the range.
Secondly, there's the question of productivity. Even with furniture, measuring productivity by the number of chairs produced per minute is disingenuous. It ignores the amount of time spent on design, ignores the quality of the final product, and even ignores its economic value. It is certainly possible to produce fewer units of furniture per unit of time than a competitor and still win on revenue, profitability, and customer sentiment.
Trying to apply the same flawed approach to productivity to software engineering is laughably silly. We automate physical good production to reduce the cost of replicating a product so we can serve more customers. Code has zero replication cost. The only valuable parts of software engineering are therefore design, quality, and other intangibles. This has always been the case, LLMs changed nothing.
The nail-in-the-coffin moment for me, when I realized AI had turned into a full-blown cult, was when people started equating a "hand-crafted artisanal" piece of software used by a million people with a hand-crafted artisanal chair used by their grandma.
The cult has its origins in Taylorism: a sort of investor religion dedicated to the idea that all economic activity will eventually be boiled down to ownership and unskilled labor.
> If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
Bullshit. The value in software isn't in the number of lines churned out, but in the usefulness of the resulting artifact. The right 10,000 lines of code can be worth a billion dollars, the cost to develop it is completely trivial in comparison. The idea that you can't take the time to handcraft software because it's too expensive is pernicious and risks lowering quality standards even further.
I could use AI to churn out hundreds of thousands of lines of code that doesn't compile. Or doesn't do anything useful, or is slower than what already exists. Does that mean I'm less productive?
Yes, obviously. If I'd written it by hand, it would work ( probably :D ).
I'm good with the machine milled lumber for the framing in my walls, and the IKEA side chair in my office. But I want a carpenter or woodworker to make my desk because I want to enjoy the things I interact with the most. And don't want to have to wonder if the particle board desk will break under the weight of my frankly obscene number of monitors while I'm out of the house.
I'm hopeful that it won't take my industry too long to become inoculated to the FUD you're spreading about how soon all engineers will lose their jobs to vibe coders. But perhaps I'm wrong, and everyone will choose the LACK over the table that lasts more than a year.
I haven't seen AI do anything impressive yet, but surely it's just another 6mo and 2B in capex+training right?
LLM’s and Agents are merely a tool to be wielded by a competent engineer. A very sophisticated tool, but a tool nonetheless. Maybe it’s because I live in the South East, as far away as I can possibly get from the echo chamber (on purpose), but I don’t see this changing anytime soon.
Not sure why you are so sure that using LLMs will be a professional requirement soon enough.
E.g., in my team I heavily discourage pushing generated code into a few critical repositories. While hiring, one of my points was not to hire an AI enthusiast.
I will do what i know gives me the best possible and fastest outcome over the long term, 5-10 year period.
And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER.
I'm fundamentally convinced that my investment into deep long term grokking of a project will allow me to surpass primarily LLM projects over the long term in raw velocity.
It also stands to reason that any task that I deem to NOT further my goal of learning or deep understanding, and that can be done by an LLM, I will use the LLM for. And as it turns out there are a TON of those tasks, so my LLM usage is incredibly high.
> I will do what i know gives me the best possible and fastest outcome over the long term, 5-10 year period.
And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER.
I'm fundamentally convinced that my deep long term understanding of a project will allow me to surpass primarily LLM projects over the long term.
I have never thought of that aspect! This is a solid point!
I love that take and sympathise deeply with it. I also have come to the conclusion to focus my manual work on those areas where I can get learning from and try to automate the rest away as much as possible.
Idk what the median lifespan of a piece of code / project / employee tenure is, but it's probably way less than 10 years, which makes that "long term investment" pretty pointless in most cases.
This is the way. I think we’re in for some rough years at first but then what you described will settle in to the “best practice” (I hate that term). I look forward to the really bizarre bugs and incidents that make the news in the next 2-3 years. …Well as long as they’re not from my teams hah :)
If you can't deliver features faster with AI assistance then you're either using it wrong or working on very specialized software that AI can't handle yet.
This is pointing out one factor of vibecoding that is talked about too little: that it feels good, and that this feeling often clouds people's judgment on what is actually achieved (i.e. you lost control of the code and are running more and more frictionless on hopes and dreams)
It feels good to some people. Personally I have difficulty relating to that, it’s antithetical to important parts of what I value about software development. Feeling good for me comes from deeply understanding the problem and the code, and knowing how they do match up.
For me, it feels good if I get it right. But unfortunately, there are many times, even with plan mode and everything specced, where after a few hours of the agent chipping away at and refactoring the problem, I realise that I can throw the whole thing away and start over. Then it feels horrible. It feels especially horrible because it feels like you have done nothing for that time and learned nothing.
I tried writing a small utility library using Windows Copilot, just for some experience with the tech (OK, not the highest tech, but I am 73 this year), and found it mildly impressive, but quite slow compared to what I would have done myself to get some quality out of it. It didn't make me feel good, particularly.
It _does_ feel good, I know what you mean. I don’t understand why exactly but there’s def an emotion associated with vibe coding. It may be related to the feeling you get when you get some code working and finish a requirement or solve a problem. Maybe vibe coding gives you a shortcut to that endorphin. I think it’s going to be particularly important to manage that feeling and balance with reality. You know, I wonder how similar this reaction is to the endorphins from YouTube shorts or other social media. If it’s as addicting (and it’s looking that way) but requires a subscription and tied to work instead of entertainment then the justification for the billions and billions of investment dollars is obvious. Interesting times indeed.
I like "vibe doc reading" and "vibe code explanation" but am continually frustrated with vibe coding. I can certainly generate code, but it's definitely not my style, and I feel reluctant to put my name on it, since it's frequently non-trivial to completely understand and validate code you didn't actually write. Additionally, I find vibe coding generates very verbose and overly abstracted code that's harder to read. I have to spend time paring the generated code back down and removing things that really weren't needed.
Conversely I feel like this is talked about a lot. I think this is a sort of essential cognitive dissonance that is present in many scenarios we're already beyond comfortable with, such as hiring consultants or off-shoring or adopting the latest hot framework. We are a species that likes things that feel good even if they're bad for us.
Yeah I get a lot of value from vibe coding and think it is the future of how we work but I’ve started to become suspicious of the pure dopamine rush it gives me. I don’t like that it is a strange combo of the sweaty feeling of playing StarCraft all night and finishing a term paper at the last minute.
I think it feels like shit, tbh. That's my biggest problem with it. The moment-to-moment feedback loop is longer than when building it myself, and the almost-there-but-not-quite sucks. Also, like the article states, waiting for the LLM is so fucking boring.
I'll also say that vibecoding only feels good until it doesn't. And then you realize you don't understand the huge mess of code you've just produced at all.
At least when I write by hand, I have a deep and intimate understanding of the system.
it feels good because we've turned coding into a gacha machine. you chase the high from when it works, and if it doesn't, you just throw more tokens at the problem.
> you lost control of the code and are running more and more frictionless on hopes and dreams
Your control over the code is your prompt. Write more detailed prompts and the control comes back. (The best part is that you can also work with the AI to come up with better prompts, but unlike with slop-written code, the result is bite-sized and easily surveyable.)
The most pertinent thought in this is where the author asks, "LLMs can generate decent-ish and correct-ish looking code while I have more time to do what? doomscroll?"
LLMs are not good enough for you to set and forget. You have to stay nearby babysitting it, keeping half an eye on it. That's what's so disheartening to many of us.
In my career I have mentored junior engineers and seen them rapidly learn new things and increase their capabilities. Watching over them for a short while is pretty rewarding. I've also worked with contract developers who were not much better than current LLMs, and like LLMs they seemed incapable of learning directly from me. Unwilling, even. They were quick to say nice words like, "OK, I understand, I'll do it differently next time," but then they didn't change at all. Those were some of the most frustrating times in my career. That's the feeling I get when using LLMs for writing code.
For work, I regularly have 2-4 agents going simultaneously, churning on 1-3 features, bug fixes, doc updates.
I pop between them in the "down time", or am reviewing their output, or preparing the requirements for the next thing, or reviewing my coworkers' MRs.
I had a colleague liken an LLM to an intern and I didn't agree with that because with a human intern you are invested in their success and there's a joy in watching them grow (when they are willing to). LLMs can't grow.
I think it is pretty indisputable that there is a valuable place for AI. I recently had to interact with a very horrible DB schema. The best approach I came up with for my challenge involved modelling a table with 300 columns. Converting some SQL DDL to a Rust struct was simple but tedious work. A prompt of fewer than 15 words guided an AI to produce the 900+ LOC for me. It took a couple of seconds to scan it and see that each field had both annotations I needed and that the datatypes were sane.
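To get a feel for why this kind of DDL-to-struct conversion is "simple but tedious" and thus a good fit for automation, here is a minimal sketch. Everything in it is invented for illustration (the column lines, the type table, and the helper name are assumptions, not the commenter's actual schema or tooling):

```python
import re

# Map a handful of SQL types to Rust types; a real table would need many more.
SQL_TO_RUST = {"BIGINT": "i64", "INT": "i32", "VARCHAR": "String", "TEXT": "String"}

def ddl_field_to_rust(line: str) -> str:
    # "order_id BIGINT NOT NULL,"  ->  "pub order_id: i64,"
    name, sql_type = line.strip().rstrip(",").split()[:2]
    base = re.sub(r"\(.*\)", "", sql_type).upper()  # VARCHAR(64) -> VARCHAR
    return f"pub {name}: {SQL_TO_RUST.get(base, 'String')},"

print(ddl_field_to_rust("order_id BIGINT NOT NULL,"))  # pub order_id: i64,
print(ddl_field_to_rust("customer_ref VARCHAR(64),"))  # pub customer_ref: String,
```

The mapping is mechanical enough that either a short script or a short prompt gets you there; the interesting work is checking the 300-column edge cases, not the typing.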
That is exactly the type of help that makes me happy to have AI assistance. I have no idea how much electricity it consumed. Somebody more clever than me might have prompted the AI to generate the other 100 LOC that used the struct to solve the whole problem. But it would have taken me longer to build the prompt than it took me to write the code.
Perhaps an AI might have come up with a more clever solution. Perhaps memorializing a prompt in a comment would be super insightful documentation. But I don't really need or want AI to do everything for me. I use it or not in a way that makes me happy. Right now that means I don't use it very much. Mostly because I haven't spent the time to learn how to use it. But I'm happy.
You’d have consumed probably two or more orders of magnitude more energy just on the coffee (and its growth and supply chain) while writing that piece of code. Not counting the building, food, transportation…
Has anyone got any insights into what hiring software engineers looks like these days? As someone currently with a job and not hiring it is hard to imagine.
Has there been any sort of paradigm shift in coding interviews? Is LLM use expected/encouraged or frowned upon?
If companies are still looking for people to write code by hand then perhaps the author is onto something, if however we as an industry are moving on, will those who don't adapt be relegated to hobbyists?
I haven’t noticed much change yet at my firm. However, I work at a giant organization (700k+ employees) and they’re struggling to keep up. The lawyers aren’t even sure if we own the IP of agent generated code let alone the legal risk of sending client IP to the model providers.
It's obvious: companies will require both hand-coding and ai-coding skills. Job seeking has been hoop-jumping for many years, so why not one extra hoop?
Most of the hiring is happening at heavy-AI coding companies; a lot of mid-sized companies have frozen hiring, or they are only hiring people who claim to use AI to be 10x devs. For non-lying devs, only big companies seem to be hiring, and their process hasn't changed much: you are still expected to solve leetcode and then also sit through system design.
1. The thing to be written is available online. AI is a search engine to find it, maybe also translate it to the language of choice.
2. The thing (system or component or function) is genuinely new. The spec has to be very precise, and the AI is just doing the typing. This is, at best, working around syntax issues, such as some hard-to-remember particular SQL syntax. The languages should be better.
3. It‘s neither new nor available online but a lot to type out and modify. The AI does all the boilerplate. This is a failure of the frameworks and languages to require so much boilerplate.
I’m really happy to see this take. It’s not the first time, but it’s not said often enough. I once had the thought that anything AI can do really well is probably something that should not be being done at all. That’s an overly broad statement, but I think there’s some truth in it. The grand challenge of software engineering is to find beautifully elegant and precise ways to express what we want the computer to do for us. If we can find them, it will be better to express ourselves in those ways than to prompt an AI to do it for us, much in the same way that a blog written by an LLM is not worth reading.
There's another category (possibly a subset of 1), the implementation is novel but set of requirements is known and has a set of conformance tests so tight that the LLM can basically brute force its way to a solution. See e.g. the Claude Compiler thing. In this case it's less "search engine + translator" and more of "brute force search".
I've developed at the speed of "vibecoding" long before LLMs by having highly thought-compressed tools, frameworks and snippets. Most of my applications use Model Driven Development where the data model automatically builds the application DAO/controllers/validations/migrations. The data model is the application. I find LLMs help me write procedures upon this data model even a little bit faster than I did before. But the data model is the design. Unless I turnover the entire design to the LLM, I am always the decider on the data model. I will always have more context about where I want to evolve the data model. I enjoy the data modelling aspect and want to remain in the driver seat, with LLMs as my implementer of procedures.
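As a rough illustration of the model-driven idea described above, a hedged sketch (every name here is hypothetical, not the commenter's actual framework): the data model is declared once, and validation (and, in a fuller version, migrations and DAO code) is derived from it instead of hand-written per feature:

```python
# Hypothetical model declaration: field -> (type name, required?)
MODEL = {
    "user": {"name": ("str", True), "age": ("int", False)},
}
PY_TYPES = {"str": str, "int": int}

def validate(entity: str, record: dict) -> list:
    """Derive validation directly from the model instead of coding it by hand."""
    errors = []
    for field, (typ, required) in MODEL[entity].items():
        if field not in record:
            if required:
                errors.append(f"missing required field: {field}")
        elif not isinstance(record[field], PY_TYPES[typ]):
            errors.append(f"{field}: expected {typ}")
    return errors

print(validate("user", {"name": "Ada"}))   # []
print(validate("user", {"age": "forty"}))  # missing name, wrong type for age
```

In this style, changing the data model changes the generated behavior everywhere at once, which is why the commenter can say "the data model is the design" and keep the LLM confined to procedures on top of it.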
For me writing code is clarifying ideas, it’s an important part of the process.
Sometimes you start to see a radical way of simplifying what you want, that only happens if you are willing to transform what your requirements are if they turn out to be overly prescriptive.
Though I think it is probably better for your career to churn out lines: it takes longer to radically simplify, and people don’t always appreciate the effort. If instead you go the other way and increase scope, time, and complexity, that will more likely result in rewards to you for the greater effort.
Even if Claude writes 100% code, I think there will be a bifurcation between people who are finicky about 10 lines of code. And those finicky about high level product experiences.
I think the 10 lines of code people worry their jobs now become obsolete. In cases where the code required googling how to do X with Y technology, that's true. That's just going to be trivially solvable. And it will cause us to not need as many developers.
In my experience though, the 10 lines of finicky code use case usually has specific attributes:
1. You don't have well defined requirements. We're discovering correctness as we go. We 'code' to think how to solve the problem, adding / removing / changing tests as we go.
2. The constraints / correctness of this code are extremely multifaceted. It simultaneously matters for it to be fast, correct, secure, easy to use, etc.
3. We're adapting a general solution (ie a login flow) to our specific company or domain. And the latter requires us to provide careful guidance to the LLM to get the right output
It may be Claude Code around these fewer bits of code, but in these cases it's still important to have taste and care with the code details themselves.
We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
One of the first bugs I found - and fixed - at my current job instantly made us an extra 200k/year. One line of code (potentially a one-character fix?), causing a little bug nobody noticed, which I only saw because I like to comb through application logs, and which was caused by a peculiarity of the data. Would an LLM have written better code? Maybe. But I've seen a lot of bad code churned out by LLMs, even today. I'm not saying every line matters - particularly for frontend code - but sometimes individual lines of code, or even individual characters, can be tremendously important, and not be written in any spec, not tested with all possible data combinations, or documented anywhere. At a previous job, I spent several days unraveling another one-line bug that was keeping a multi-million dollar project from running at all. Again, totally non-obvious unless you had a tremendous amount of context and were running a pretty complex system to figure it out, with a sort of tenacity the LLMs don't currently possess.
> I think the 10 lines of code people worry their jobs now become obsolete.
I'm gonna assume you think you're in the other camp, but please correct me if I'm mistaken.
I'd say I'm in the 10-lines-of-code camp, but I'd say that group is the least afraid of this fictionalized career threat. The people who obsess over those 10 lines are the same people who show up to fix the system when prod goes down. They're the ones who change 2 lines of code to get a 35% performance boost.
It annoys me a lot when people ship broken code. Vibe coded slop is almost always broken, because of those 10 lines.
> I think there will be a bifurcation between people who are finicky about 10 lines of code. And those finicky about high level product experiences.
No one cares about a random 10 lines of code. And the focus of AI hypers on LoC is disturbing. Either the code is correct and good (allows for change later down the line) or it isn't.
> We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
It really depends on the project for me. For example, I never enjoyed writing React code (or really any UI), just the outcome of my idea materializing in a usable interface. There is nothing creative or fun for me in almost any UX framework. It’s just a ton of predictable typing (now we need a fricking box. And another box. And another stupid box…). I’m more than happy outsourcing that. However, my thoughts are too random and imprecise, so actually outsourcing it to another person always felt disrespectful to them. I don’t have to worry about that with AI. My company is paying for it, and when I’m prototyping a React thing every now and then, I burn a few thousand dollars a day for the lols.
If they don't like it, take it away. I just won't do that part because I have no interest in it. Some other parts of the project I do enjoy working on by hand: at least setting up the patterns I think will result in simple readable flow, reduce potential bugs, etc. AI is not great at that. It's happy to mix strings, nulls, bad type castings, no separation of concerns, no small understandable functions, no reusable code, etc., which is the part I enjoy thinking about.
Same with GUIs. I'm making a web GUI that's very specific to a project I'm working on. My team finds it very useful, but I would never have made that thing without AI assistance: a combination of me not finding it interesting or fun, it taking too long, and me not being familiar with web GUI stuff.
I like writing code that I don't have time pressure around, as well as the kind where I can afford to fail and use that as a learning experience. Especially the code that I can structure myself.
I sometimes dread writing code that's in a state of bad disrepair or is overly complex, think a lot of the "enterprise" code out there - it got so bad that I more or less quit a job over it, though never really stated that publicly, alongside my mind going dark places when you have pressure to succeed but the circumstances are stacked against you.
For a while I had a few Markdown files that went into detail exactly why I hated it, in addition to also being able to point my finger at a few people responsible for it. I tried approaching it professionally, but it never changed and the suggestions and complaints largely fell on deaf ears. Obviously I've learnt that while you can try to provide suggestions, some people and circumstances will never change, often it's about culture fit.
But yeah, outsource all of that to AI, don't even look back. Your sanity is worth more than that.
I wonder if some of the divide in the LLM-code discourse is between people who have mostly/always worked in jobs where they have the time and freedom to do things correctly, and to go back and fix stuff as they go, vs people who have mostly not (and instead worked under constant unrealistic time pressure, no focus on quality, API design, re-factoring, etc)
Yea, my job as a SWE is to have a correct mental model of the code and bring it with me everywhere I go... meetings, feature design, debugging sessions. Lines of code written is not unimportant, but it matters way less when you look at the big picture.
I am happy someone else is also talking about the addictive nature of vibe coding and its gambling-esque rewards. Will we see agentic programmers begging for tokens on Kickstarter in the future? That would be funny.
I very much enjoy the activity of writing code. For me, programming is pure stress relief. I love the focus and the feeling of flow, I love figuring out an elegant solution, I love tastefully structuring things based on my experience of what concerns matter, etc.
Despite the AI tools I still do that: I put my effort into the areas of the code that count, or that offer an intellectually stimulating challenge, or where I want to manually explore and think my way into the problem space and try out different API or structure ideas.
In parallel to that I keep my background queue of AI agents fed with more menial or less interesting tasks. I take the things I learn in my mental "main thread" into the specs I write for the agents. And when I need to take a break on my mental "main thread" I review their results.
IMHO this is the way to go for us experienced developers who enjoy writing code. Don't stop doing that, there's still a lot of value in it. Write code consciously and actively, participate in the creation. But learn to utilize agents and keep them busy in parallel or when you're off-keyboard. Delegate, basically. There are quite a lot of things they can do already that you really don't need to do because the outcome is completely predictable. I feel that it's possible to actually increase the hours/day spent focusing on stimulating problems that way.
The "you're just mindlessly prompting all day" or "the fun is gone" are choices you don't need to be making.
There’s been a new category of writings the last year. The AI Inevitability Soothsaying.[1]
There's talk of war in the state of Nationstan. There are two camps: those who think going to war is good and just, and those who think it is not practical. Clearly not everyone is pro-war. But the Overton Window is defined by the premise that invading another country is a right that Nationstan has and can act on. There is, by definition (inside the Overton Window), no one who is anti-war on the principle that the state has no right to do it.[2]
Not all articles in this AI category are outright positive. They range from the euphoric to the slightly depressed. But they share the same premise of inevitability; even the most negative will say that, of course I use AI, I’m not some Luddite[3]! It is integral to my work now. But I don’t just let it run the whole game. I copy–paste with judicious care. blah blah blah
The point of any Overton Window is to simulate lively debate within the confines of the premises.
And it’s impressive how many aspects of “the human” (RIP?) it covers. Emotions, self-esteem, character, identity. We are not[4] marching into irrelevance without a good consoling. Consolation?
[2] This was taken from the formerly famous (and controversial among the Khmer Rouge obsessed) Chomsky, now living in infamy for obvious reasons.
[3] Many paragraphs could be written about this
[4] We. Well, maybe me and others, not necessarily you. Depending on your view of whether the elites or the Mensa+ engineers will inherit the machines.
I said something similar in a different thread, but the joy of actually physically writing code is the main reason why I became a software developer. I think there is some beauty to writing code. I enjoy typing the syntax, the interaction with my IDE, debugging by hand (and brain) rather than with an LLM, even if it's less efficient. I still use AI, but I do find it terribly sad that this type of more "manual" programming seems to be getting forced out.
I also enjoy walking more than driving, but if I had to travel 50 miles every day for my job, I would never dream of going on foot. Same goes for AI for me. If I can finish a project in half the time or less, I still feel enough accomplishment and on top of that I will use the gained free time for self actualisation. I like my job and I love coding and solving challenging problems, but I also love tons of other stuff that could use more of my attention. AI has created an insane net positive value for me so far. And I see tons of other people who could also benefit from it the same way, if only they spent a bit more time learning how to use it effectively. Considering how everyone and their uncle thinks they need to chime in on what AI is or is not or what it can or can not do, I find most people have frustratingly little insight into what you can actually do already. Even the people working at companies like Amazon or MS who claim to work on AI integrations sometimes seem to be missing some essentials.
I feel hand-written (human-written) code from an experienced individual should be more valuable to a business than code created by agents. Sure, agents and humans might be using the same underlying frameworks or programming languages, but the value difference depends on the breadth and depth of experience. Agents give you the breadth, but an experienced individual gives you the depth in understanding and problem solving.
It's one of the factors, especially when you consider it not just as one of the factors ethically, but also because their input is valued and if they are not happy it means something might be operationally wrong (although of course there might be a tradeoff between productivity and worker happiness)
Seems like the author has a case of all or nothing. The real power in agentic programming, to me, is not in extremes, but in that you are still actively present. You don't give it world-size things to do, but byte-sized, and you constantly steer it. It's being detailed enough to produce quality, and being aware of everything it produces, but not so detailed that it makes sense to just write the code yourself. It's a delicate balance, but once you've found it, incredibly powerful. Especially mixed with deterministic self-checking tools (like some MCPs).
If you "set and forget", then you are vibe coding, and I do not trust for a second that the output is quality, or that you'd even know how that output fits into the larger system. You effectively delegate away the reason you are being paid onto the AI, so why pay you? What are you adding to the mix here? Your prompting skills?
Agentic programming to me is just a more efficient use of the tools I already used anyway, but it's not doing the thinking for me, it's just doing the _doing_ for me.
I am with you and fully agree with your "it does not have to be an all or nothing" stance. A remark on one part of your comment:
> What are you adding to the mix here? Your prompting skills?
The answer to that is an unironic and dead-serious "yes!".
My colleagues use Claude Opus and it does an okay job but misses important things occasionally. I've had one 18-hour session with it and fixed 3 serious but subtle and difficult to reproduce bugs. And fixed 6-7 flaky tests and our CI has been 100% green ever since.
Being a skilled operator is an actual billable skill IMO. And that will continue to be the case for a while unless the LLM companies manage to make another big leap.
I've personally witnessed Opus do world-class detective work. I even left it unattended and it churned away on a problem for almost 5h. But I spent an entire hour before that carefully telling it its success criteria, never to delete tests, never to relax requirements X & Y & Z, always to use this exact feedback loop when testing after it iterated on a fix, and a bunch of others.
In that ~5h session, Opus fixed another extremely annoying bug, and it found mistakes in tests and corrected them after first correcting the production code and writing new tests.
Opus can be scary good but you must not handwave anything away.
I found love for being an architect ever since I started using the newest generation [of scarily smart-looking] LLMs.
I also came to a pretty simple understanding over the years. If I'm coding and making progress on a project, I'm happy. If I'm not, or I'm stuck on something, I'm unhappy. This is a profoundly unhealthy way to live because life will pass you by. There is more to our existence than work, or even hobbies. And if AI lets me get more time for that, I am happier than ever.
This is great in theory, but answer me sincerely: are you spending less time at work because of AI? Because I reckon for most programmers here it is not the case at all.
Yes, but is AI really getting you unstuck, or are you playing a game of whack-a-mole where it fixes one bug and generates several others that you are unaware of (just one example)?
I find it helps by forcing me to be focused on a task for a few hours. Just the blocked-out attention I spend on it helps refine and discover new problems and angles, etc. I don't think just blocking out the time without actually trying to code it (staring at a wall) is as effective.
I am TL of an Android app with dozens of screens that expose hundreds of different distinct functions. My task is to expose all of these functions as appfunctions that can be called by an LLM in response to free form user requests. My current plan is to build a little LangGraph pipeline where first step is AI documenting all functions in each app's fragment, second step is extracting them into app functions, then refactoring fragment to call app functions etc. And by build I mean Gemini will build it for me and I will ask for some refinement and edit prompts.
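Stripped of the LangGraph specifics (graph wiring, state schemas), the three-step plan described above is essentially a sequence of transforms over each fragment. A minimal sketch of that shape, with all names (`Fragment`, the step functions, the `appFunctions` prefix) invented for illustration and the LLM calls stubbed out:

```python
# Hypothetical sketch of the three-step refactoring pipeline described
# above. In the real version each step would call an LLM (e.g. via a
# LangGraph node); here the steps are stubbed so the data flow is visible.

from dataclasses import dataclass, field

@dataclass
class Fragment:
    name: str
    source: str
    docs: dict = field(default_factory=dict)            # step 1 output
    app_functions: list = field(default_factory=list)   # step 2 output

def document_functions(frag: Fragment) -> Fragment:
    # Step 1: have the LLM document every function in the fragment.
    frag.docs = {"openSettings": "Opens the settings screen."}  # stub
    return frag

def extract_app_functions(frag: Fragment) -> Fragment:
    # Step 2: lift the documented functions out as standalone app functions.
    frag.app_functions = list(frag.docs)
    return frag

def refactor_fragment(frag: Fragment) -> Fragment:
    # Step 3: rewrite the fragment to delegate to the extracted functions.
    for fn in frag.app_functions:
        frag.source = frag.source.replace(fn, f"appFunctions.{fn}")
    return frag

def pipeline(frag: Fragment) -> Fragment:
    for step in (document_functions, extract_app_functions, refactor_fragment):
        frag = step(frag)
    return frag

result = pipeline(Fragment("SettingsFragment", "openSettings()"))
print(result.source)  # appFunctions.openSettings()
```

The value of a graph framework on top of this would be retries, branching, and human-in-the-loop review between steps; the core flow is just these three transforms run per fragment.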
I also like writing code by hand, I just don't want to maintain other people's code. LMK if you need a job referral to hand refactor 20K lines of code in 2 months. Do you also enjoy working on test coverage?
> The process of writing code helps internalize the context and is easier for my brain to think deeply about it.
True, and you really do need to internalize the context to be a good software developer.
However, just because coding is how you're used to internalizing context doesn't mean it's the only good way to do it.
(I've always had a problem with people jumping into coding when they don't really understand what they are doing. I don't expect LLMs to change that, but the pernicious part of the old way is that the code -- much of it developed in ignorance -- became too entrenched/expensive to change in significant ways. Perhaps that part will change? Hopefully, anyway.)
I have a problem vibecoding and frustrating experiences with coworkers that are a little too much into it.
I can use AI to help me explore libraries or to replace a search, generate small snippets here and there, or even scripts that I occasionally need. But I can't vibecode. I don't know how to let go. I babysit too much, I read the code, and I feel uneasy if I don't understand what I'm building, or why I'm building it a certain way. I need to understand how the pieces work to make a whole.
> “vibe coding has an addictive nature to it, you write some instructions, and code that looks correct is generated. Bam! Dopamine hit! If the code isn’t correct, then it’s just one prompt away from being correct”
My wife and my dad enjoy assembling furniture (the former free style, the latter off the instructions). I like the furniture assembled but I cannot stand doing it. Some of us are one way and others are the other way.
For me, LLMs are joyful experiences. I think of ideas and they make them happen. Remarkable and enjoyable. I can see how someone who would rather assemble the furniture, or perhaps build it, would like to do that.
>Even if I generate a 1,000 line PR in 30 minutes I still need to understand and review it. Since I am responsible for the code I ship, this makes me the bottleneck.
You don't ship it, the AI does. You're just the middleman, a middleman they can eventually remove altogether.
>Now, I would be lying if I said I didn’t use LLMs to generate code. I still use Claude, but I do so in a more controlled manner.
"I can quit if I want"
>Manually giving claude the context forces me to be familiar with the codebase myself, rather than tell it to just “cook”. It turns code generation from a passive action to a deliberate thoughtful action. It also keeps my brain engaged and active, which means I can still enter the flow state. I have found this to be the best of both worlds and a way to preserve my happiness at work.
And then soon the boss demands more output, like the guys who left it all to Claude and even run 5x in parallel give.
Every day there's a thread about this topic, and the discussions always circle around the same arguments.
I think we should be worrying about more urgent things, like a worker doing the job of three people with ai agents, the mental load that comes with that, how much of the disruption caused by ai will disproportionately benefit owners rather than employees, and so on.
I like to code, and then, when I have resolved the solution to its completeness, I know what to ask an LLM to provide a solution to the same problem. This sometimes provides a different approach; other times, it provides variations on my solution, thus improving upon my solid code solution.
I’m in a similar camp to the OP. For me, my joy doesn’t come from building - it comes from understanding. Which incidentally has actually made SWE not a great career path for me because I get bored building features, but that’s another story…
LLMs have been a tremendous boon for me in terms of learning.
Initially I felt like this, but now I've changed. Now I realise a lot of grunt work doesn't need to be done by me; I can direct an LLM to make changes. I can also experiment more, as I'm able to build complex features, try them out, and delete them without feeling too bad.
It's a phenomenon you see in a lot of crafts. We enjoy the craft, but when it becomes all about the product and we optimize for that, the fun goes away.
I suspect this is like the invention of the car. Some people just love riding horses, so they’ll keep doing it as a hobby. The rest of us are fine with a car.
> Even if I generate a 1,000 line PR in 30 minutes I still need to understand and review it. Since I am responsible for the code I ship, this makes me the bottleneck.
I am not responsible for choosing whether the code I write uses a for loop or a while loop. I am responsible for whether my implementation - code, architecture, user experience - meets the functional and non-functional requirements. For well over a decade, my responsibilities have required delegating the work to other developers, or even outsourcing an entire implementation to another company, like a SalesForce implementation.
When I got my first job long ago, I found that code review does involve arguing over things like for vs while loop, or having proper grammar in comments. Thought about quitting for a sec.
Now that I have more experience and manage other SWEs, I was right, that stuff was dumb and I'm glad that nobody cares anymore. I'll spend the time reviewing but only the important things.
Of course. Almost everyone who knows how to ride a horse is happier riding a horse than driving a car, too. Or hell, in decent weather, even riding a bike.
In fact, it's even worse - driving a car is one of the least happy modes of getting around there is. And sure, maybe you really enjoy driving one. You're a rare breed when it comes down to it.
Yet it's responsible by far for the most people-distance transported every day.
Basically describes how I use Claude Code now. I'll let it do stuff I don't want to do, like setting up mocks for unit tests (boring) or editing GitHub Actions YAML (torture). But otherwise, I like to let it show me how to do something I'm not sure how to do, and then I'll just go do it myself. (If I have a clear idea of how I want to do something already, I just do it myself in the first place.)
I almost never agree with the names Claude chooses, I despise the comments it adds every other line despite me telling it over and over and over not to, and oftentimes I catch the silly bugs that look fine at first glance when you just let Claude write its output directly to the file.
It feels like a good balance, to me. Nobody on my team is working drastically faster than me, with or without AI. It very obviously slows down my boss (who just doesn't pay attention and has to rework everything twice) or some of the juniors (who don't sufficiently understand the problem to begin with). I'll be more productive than them even if I am hand-writing most of the code. So I don't feel threatened by this idea that "hand-written code will be something nobody does professionally here soon" -- like the article said, if I'm responsible for the code I submit, I'm still the bottleneck, AI or not. The time I spend writing my own code is time I'm not poring over AI output trying to verify that it's actually correct, and for now that's a good trade.
A lot of this discussion is just sort of moot, because the cold hard calculus of economics will dictate the future of AI coding. If it turns out it's just a cognitive burden that makes programmers worse, the bubble will pop, and eventually the companies that move away from the technology will come out on top. If it turns out to make software engineering much more efficient, it will become the de facto standard and you will become obsolete as a professional engineer (at least at the vast majority of employers) regardless of how you feel about it. How you wish to code in your free time is up to you, and that doesn't really warrant an argument one way or the other, since there is no wrong answer.
Programming is creative work. Replacing human creativity with pseudo-parrot code generation impacts this process in bad ways. It's the same reason many artists despise using AI for art.
Bean counters don't care about creativity and art though, so they'll never get it.
Good for artists I guess, I wouldn't know because I am not one. The best I can manage is drawing a stick figure of a cat. Years back I was working on a Mac app and I needed an icon. So I talked to an artist and she asked for $5K to make one for me. I couldn't justify spending so much on a hobby that I didn't know if it would go anywhere so I wrote a little app that procedurally generated me some basic sucky icon. I am sure Gordon Ramsay is also not impressed with cooking skills of my microwave, I just don't know how his objections practically relate to getting me fed daily.
> “What’s the point of it all?” I thought, LLMs can generate decent-ish and correct-ish looking code while I have more time to do what? doomscroll?
You could look back throughout human history at the inventions that made labor more efficient and ask the same question. The time-savings could either result in more time to do even more work, or more time to keep projects on pace at a sane and sustainable rate. It's up to us to choose.
I hate typing strings of syntax. So boring. Never saw the appeal. I do like tinkering with ideas, concepts, structure... just not the mechanical interaction part. I'm not the best typist... then again, it's the same with playing Factorio. I love the concept of building structures, but fighting the UI to communicate my ideas is such a drag...
Bro discovered that using a calculator makes him happier doing long division by hand and decided the rest of us are just dopamine junkies for enjoying tools that actually scale.
ngruhn|21 days ago
There are few skills that are both fun and highly valued. It's disheartening if it stops being highly valued, even if you can still do it in private.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
I'm not pretending. I'm only sad.
cline6|21 days ago
You cannot tell AI to do just one thing, have it do it extremely well, or do it reliably.
And while there's a lot of opinions wrapped up in it all, it is very debatable whether AI is even solving a problem that exists. Was coding ever really the bottleneck?
And while the hype is huge and adoption is skyrocketing, there hasn't been a shred of evidence that it actually increases productivity or quality. In fact, study after study continues to show that speed and quality actually go down with AI.
panarky|21 days ago
A few even make a good living by selling their artisanal creations.
Good for them!
It's great when people can earn a living doing what they love.
But wool spinning and cloth weaving are automated and apparel is mass produced.
There will always be some skilled artisans who do it by hand, but the vast majority of decent jobs in textile production are in design, managing machines and factories, sales and distribution.
krupan|21 days ago
A friend of mine reposted someone saying that "AI will soon be improving itself with no human intervention!!" And I tried asking my friend if he could imagine how an LLM could design and manufacture a chip, and then a computer to use that chip, and then a data center to house thousands of those computers, and he had no response.
People have no perspective but are making bold assertion after bold assertion
If this doesn't signal a bubble I don't know what does
hintymad|21 days ago
The key question now is: how far can AI go? It started with simple auto-completion, but as AI absorbs more procedural know-how, it becomes capable of generating increasingly larger chunks of maintainable code. Perhaps we are reaching a point where established patterns are so well-understood that AI can bridge the gap between a vague intent and a working system, effectively automating away what Brooks once considered essential complexity.
In the long run, this probably makes experts more valuable, but it’ll gut the demand for standard engineers. So much of our market value is currently tied to how hard it is to transfer expertise among humans. AI renders that bottleneck moot. Once the know-how is commoditized, the only thing left is the what and why.
pawelwentpawel|21 days ago
Nevertheless, the main motivator for me has always been the final outcome - a product or tool that other people use. Using AI helps me move much faster and frees up a lot of time to focus on the core, which is building the best possible thing I can build.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
Opus 4.5 just came out around 3 months ago. We are still very early in this game. Creating things this year already makes me feel like I'm in the Enchanted Pencil (*) cartoon, in which the boy draws an object with a magic pencil and makes it reality within seconds. With the collective effort of everyone involved in building the AI tools, and the incentives aligned (as they are right now), the progress will continue to be very rapid. You can still code by hand, but it will be very hard to compete in the market without the use of AI.
(*) It's a Polish cartoon from the 60s/70s (no language barrier) - https://www.youtube.com/watch?v=-inIMrU1t7s
onlyrealcuzzo|21 days ago
There's going to be minimal "junior" jobs where you're mostly implementing - I guess roughly equivalent to working wood by hand - but there's still going to be jobs resembling senior level FAANG jobs for the foreseeable future.
Someone's going to have to do the work, babysit the algorithm, know how to verify that it actually works, know how to know that it actually does what it's supposed to do, know how to know if the people who asked for it actually knew what they were asking for, etc.
Will pay go down? Who knows. It's easy to imagine a world in which this creates MORE demand for seniors, even if there's less demand for "all SWEs" because there's almost zero demand for new juniors.
And at least for some time, you're going to need non-trivial babysitting to get anything non-trivial to "just work".
At the scale of a FAANG codebase, AI is currently not that helpful.
Sure, Gemini might have a million-token context, but the larger the context, the worse the performance.
This is a hard problem to solve, that has had minimal progress in what - 3 years?
If there's a MAJOR breakthrough on output performance wrt context size - then things could change quickly.
The LLMs are currently insanely good at implementing non-novel things in small context windows - mainly because their training sets are big enough that it's essentially a search problem.
But there's a lot more engineering jobs than people think that AREN'T primarily doing this.
MarkMarine|21 days ago
No job site would tolerate someone bringing a hand saw to cut rafters when you could use a circular saw; the outcome is what matters. In the same vein, if you're too sloppy cutting with the circular saw, you're going to get kicked off the site too. Just keep in mind a home made from dimensional lumber is at the bottom of the precision scale. The software equivalent of a rapper's website announcing a new album.
There are places where precision matters, building a nuclear power plant, software that runs an airplane or an insulin pump. There will still be a place for the real craftsman.
batshit_beaver|21 days ago
I take issue even with this part.
First of all, all furniture definitely can't be built by machines, and no major piece of furniture is produced by machines end to end. Even assembly still requires human effort, let alone design (and let alone choosing, configuring, and running the machines responsible for the automatable parts). So really, a given piece of furniture may range from 1% machine-built (just the screws) to 90%, but it's never 100%, and rarely that close to the top of the range.
Secondly, there's the question of productivity. Even with furniture measuring by the number of chairs produced per minute is disingenuous. This ignores the amount of time spent on the design, ignores the quality of the final product, and even ignores its economic value. It is certainly possible to produce fewer units of furniture per unit of time than a competitor and still win on revenue, profitability, and customer sentiment.
Trying to apply the same flawed approach to productivity to software engineering is laughably silly. We automate physical good production to reduce the cost of replicating a product so we can serve more customers. Code has zero replication cost. The only valuable parts of software engineering are therefore design, quality, and other intangibles. This has always been the case, LLMs changed nothing.
pydry|21 days ago
The cult has its origins in Taylorism - a sort of investor religion dedicated to the idea that all economic activity will eventually be boiled down to ownership and unskilled labor.
kryptiskt|21 days ago
Bullshit. The value in software isn't in the number of lines churned out, but in the usefulness of the resulting artifact. The right 10,000 lines of code can be worth a billion dollars, the cost to develop it is completely trivial in comparison. The idea that you can't take the time to handcraft software because it's too expensive is pernicious and risks lowering quality standards even further.
grayhatter|21 days ago
I could use AI to churn out hundreds of thousands of lines of code that doesn't compile. Or doesn't do anything useful, or is slower than what already exists. Does that mean I'm less productive?
Yes, obviously. If I'd written it by hand, it would work ( probably :D ).
I'm good with the machine milled lumber for the framing in my walls, and the IKEA side chair in my office. But I want a carpenter or woodworker to make my desk because I want to enjoy the things I interact with the most. And don't want to have to wonder if the particle board desk will break under the weight of my frankly obscene number of monitors while I'm out of the house.
I'm hopeful that it won't take my industry too long to become inoculated against the FUD you're spreading about how soon all engineers will lose their jobs to vibe coders. But perhaps I'm wrong, and everyone will choose the LACK over the table that lasts more than most of the year.
I haven't seen AI do anything impressive yet, but surely it's just another 6mo and 2B in capex+training right?
tasuki|21 days ago
If the local bakery can sell expensive artisanal brioches, surely the programmers can sell expensive artisanal ones and zeroes!
unknown|21 days ago
[deleted]
ainiro|21 days ago
Psst ==> https://www.youtube.com/watch?v=k6eSKxc6oM8
MY project (MIT licensed) ...
oompydoompy74|21 days ago
zozbot234|21 days ago
blks|21 days ago
E.g. in my team I heavily discourage generating code and pushing it into a few critical repositories. While hiring, one of my points was not to hire an AI enthusiast.
panzi|21 days ago
acedTrex|21 days ago
And that remains largely neovim and by hand. The process of typing code gives me a deeper understanding of the project that lets me deliver future features FASTER.
I'm fundamentally convinced that my investment into deep long term grokking of a project will allow me to surpass primarily LLM projects over the long term in raw velocity.
It also stands to reason that any task I deem to NOT further my goal of learning or deep understanding, and that can be done by an LLM, I will use the LLM for. And as it turns out there are a TON of those tasks, so my LLM usage is incredibly high.
lazyfolder|21 days ago
I have never thought of that aspect! This is a solid point!
tablatom|21 days ago
wazHFsRy|21 days ago
lokar|21 days ago
I generally frame this as: Are you optimizing for where you will be in 6 months, or 2 years?
OptionOfT|21 days ago
mupuff1234|21 days ago
chasd00|21 days ago
redox99|21 days ago
rf15|21 days ago
layer8|21 days ago
lazyfolder|21 days ago
wazHFsRy|21 days ago
zabzonk|21 days ago
I tried writing a small utility library using Windows Copilot, just for some experience with the tech (OK, not the highest tech, but I am 73 this year) and found it mildly impressive, but quite slow compared to what I would have done myself to get some quality out of it. It didn't make me feel good, particularly.
chasd00|21 days ago
nijave|21 days ago
xyzzy_plugh|21 days ago
We don't stand a chance and we know it.
sp1nningaway|21 days ago
zanellato19|21 days ago
Trasmatta|21 days ago
At least when I write by hand, I have a deep and intimate understanding of the system.
adelie|21 days ago
zozbot234|21 days ago
Your control over the code is your prompt. Write more detailed prompts and the control comes back. (The best part is that you can also work with the AI to come up with better prompts, but unlike with slop-written code, the result is bite-sized and easily surveyable.)
krupan|21 days ago
LLMs are not good enough for you to set and forget. You have to stay nearby babysitting it, keeping half an eye on it. That's what's so disheartening to many of us.
In my career I have mentored junior engineers and seen them rapidly learn new things and increase their capabilities. Watching over them for a short while is pretty rewarding. I've also worked with contract developers who were not much better than current LLMs, and like LLMs they seemed incapable of learning directly from me. Unwilling even. They were quick to say nice words like, "ok, I understand, I'll do it differently next time," but then they didn't change at all. Those were some of the most frustrating times in my career. That's the feeling I get when using LLMs for writing code.
spicyusername|21 days ago
I pop between them in the "down time", or am reviewing their output, or am preparing the requirements for the next thing, or am reviewing my coworkers' MRs.
Plenty to do that isn't doom scrolling.
lazyfolder|21 days ago
freeopinion|21 days ago
That is exactly the type of help that makes me happy to have AI assistance. I have no idea how much electricity it consumed. Somebody more clever than me might have prompted the AI to generate the other 100 loc that used the struct to solve the whole problem. But it would have taken me longer to build the prompt than it took me to write the code.
Perhaps an AI might have come up with a more clever solution. Perhaps memorializing a prompt in a comment would be super insightful documentation. But I don't really need or want AI to do everything for me. I use it or not in a way that makes me happy. Right now that means I don't use it very much. Mostly because I haven't spent the time to learn how to use it. But I'm happy.
lokar|21 days ago
I've spent a lot of my career cleaning up stuff like that, I guess with AI we just stop caring?
nijave|21 days ago
Really I'd rather have AI generate a codegen script that deterministically handles the struct-from-schema generation
I've had enough instances where it's slid in a subtle change like adding "ing" to a field name to not fully trust it
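A deterministic codegen script of the kind described above might look like the following sketch (the schema shape and names here are hypothetical, purely for illustration). Field names are copied verbatim from the schema, so nothing like a stray "ing" can sneak in:

```python
# Hypothetical sketch: deterministic struct-from-schema codegen.
# The schema format and field names are made up for illustration.
SCHEMA = {
    "name": "User",
    "fields": {"id": "int", "email": "str", "active": "bool"},
}

def generate_dataclass(schema: dict) -> str:
    """Emit a Python dataclass from the schema, copying names verbatim."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {schema['name']}:",
    ]
    for field, typ in schema["fields"].items():
        lines.append(f"    {field}: {typ}")
    return "\n".join(lines)

print(generate_dataclass(SCHEMA))
```

Because the script itself is reviewed once and then runs mechanically, there is no per-invocation opportunity for an LLM to slide in a subtle rename.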
tomaskafka|21 days ago
Us humans are expensive part of the machine.
wtetzner|21 days ago
skzizjj|21 days ago
[deleted]
CurleighBraces|21 days ago
Has there been any sort of paradigm shift in coding interviews? Is LLM use expected/encouraged or frowned upon?
If companies are still looking for people to write code by hand then perhaps the author is onto something, if however we as an industry are moving on, will those who don't adapt be relegated to hobbyists?
chasd00|21 days ago
It’s going to take a while.
raincole|21 days ago
falloutx|21 days ago
woeirua|21 days ago
doppeldanger|21 days ago
[deleted]
throw7272855|21 days ago
1. The thing to be written is available online. AI is a search engine to find it, maybe also translate it to the language of choice.
2. The thing (system or component or function) is genuinely new. The spec has to be very precise and the AI is just doing the typing. This is, at best working around syntax issues, such as some hard-to-remember particular SQL syntax or something like that. The languages should be better.
3. It's neither new nor available online, but a lot to type out and modify. The AI does all the boilerplate. This is a failure of the frameworks and languages to require so much boilerplate.
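Case 3 can be made concrete with a small sketch (a generic example, not from the comment): the same record type written with and without language support. The hand-rolled version is exactly the boilerplate people reach for an LLM to type out, and the dataclass version is what "the languages should be better" looks like in practice:

```python
from dataclasses import dataclass

class PointManual:
    """Boilerplate version: everything written out by hand."""
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

    def __eq__(self, other):
        return isinstance(other, PointManual) and (self.x, self.y) == (other.x, other.y)

    def __repr__(self):
        return f"PointManual(x={self.x}, y={self.y})"

@dataclass
class Point:
    """The language generates __init__, __eq__, and __repr__ for you."""
    x: float
    y: float

# Both behave the same; only one needed typing (or an LLM).
assert Point(1, 2) == Point(1, 2)
assert PointManual(1, 2) == PointManual(1, 2)
```

When the language does this, there is no boilerplate left for the AI to write.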
tablatom|21 days ago
krackers|21 days ago
nijave|21 days ago
rorylaitila|21 days ago
drob518|21 days ago
utopiah|21 days ago
Isamu|21 days ago
I think, though, it is probably better for your career to churn out lines: it takes longer to radically simplify, and people don't always appreciate the effort. If you go the other way instead and increase scope, time, and complexity, that is more likely to result in rewards to you for the greater effort.
softwaredoug|21 days ago
I think the 10 lines of code people worry their jobs now become obsolete. In cases where the code required googling how to do X with Y technology, that's true. That's just going to be trivially solvable. And it will cause us to not need as many developers.
In my experience though, the 10 lines of finicky code use case usually has specific attributes:
1. You don't have well defined requirements. We're discovering correctness as we go. We 'code' to think how to solve the problem, adding / removing / changing tests as we go.
2. The constraints / correctness of this code is extremely multifaceted. It simultaneously matters for it to be fast, correct, secure, easy to use, etc
3. We're adapting a general solution (i.e. a login flow) to our specific company or domain. And the latter requires us to provide careful guidance to the LLM to get the right output
It may be Claude Code around these fewer bits of code, but in these cases it's still important to have taste and care with the code details themselves.
We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
grumple|21 days ago
grayhatter|21 days ago
I'm gonna assume you think you're in the other camp, but please correct me if I'm mistaken.
I'd say I'm in the 10 lines of code camp, but I'd say that group is the least afraid of this fictional career threat. The people who obsess over those 10 lines are the same people who show up to fix the system when prod goes down. They're the ones who change 2 lines of code to get a 35% performance boost.
It annoys me a lot when people ship broken code. Vibe coded slop is almost always broken, because of those 10 lines.
skydhash|21 days ago
No one cares about a random 10 lines of code. And the focus of AI hypers on LoC is disturbing. Either the code is correct and good (allows for change later down the line) or it isn't.
> We may weirdly be in a case where it's possible to single-shot a slack clone, but taking time to change the 2 small features we care about is time consuming and requires thoughtfulness.
You do remember how easy it is to do `git clone`?
eddythompson80|21 days ago
If they don’t like it, take it away. I just won’t do that part because I have no interest in it. Some other parts of the project I do enjoy working on by hand. At least setting up the patterns I think will result in simple readable flow, reduce potential bugs, etc. AI is not great at that. It’s happy to mix strings, nulls, bad type castings, no separation of concerns, no small understandable functions, no reusable code, etc., which is the part I enjoy thinking about
DrewADesign|21 days ago
kelipso|21 days ago
KronisLV|21 days ago
I sometimes dread writing code that's in a state of bad disrepair or is overly complex, think a lot of the "enterprise" code out there - it got so bad that I more or less quit a job over it, though never really stated that publicly, alongside my mind going dark places when you have pressure to succeed but the circumstances are stacked against you.
For a while I had a few Markdown files that went into detail exactly why I hated it, in addition to also being able to point my finger at a few people responsible for it. I tried approaching it professionally, but it never changed and the suggestions and complaints largely fell on deaf ears. Obviously I've learnt that while you can try to provide suggestions, some people and circumstances will never change, often it's about culture fit.
But yeah, outsource all of that to AI, don't even look back. Your sanity is worth more than that.
lokar|21 days ago
HPsquared|21 days ago
ebhn|21 days ago
falloutx|21 days ago
sho_hn|21 days ago
I very much enjoy the activity of writing code. For me, programming is pure stress relief. I love the focus and the feeling of flow, I love figuring out an elegant solution, I love tastefully structuring things based on my experience of what concerns matter, etc.
Despite the AI tools I still do that: I put my effort into the areas of the code that count, or that offer an intellectually stimulating challenge, or where I want to make sure to explore manually, think my way into the problem space, and try out different API or structure ideas.
In parallel to that I keep my background queue of AI agents fed with more menial or less interesting tasks. I take the things I learn in my mental "main thread" into the specs I write for the agents. And when I need to take a break on my mental "main thread" I review their results.
IMHO this is the way to go for us experienced developers who enjoy writing code. Don't stop doing that, there's still a lot of value in it. Write code consciously and actively, participate in the creation. But learn to utilize agents and keep them busy in parallel or when you're off-keyboard. Delegate, basically. There's quite a lot they can do already that you really don't need to do because the outcome is completely predictable. I feel that it's possible to actually increase the hours/day focusing on stimulating problems that way.
The "you're just mindlessly prompting all day" or "the fun is gone" are choices you don't need to be making.
keybored|21 days ago
There’s talk of war in the state of Nationstan. There are two camps: those who think going to war is good and just, and those who think it is not practical. Clearly not everyone is pro-war. There are two camps. But the Overton Window is defined with the premise that invading another country is a right that Nationstan has and can act on. There is, by definition (inside the Overton Window), no one who is anti-war on the principle that the state has no right to do it.[2]
Not all articles in this AI category are outright positive. They range from the euphoric to the slightly depressed. But they share the same premise of inevitability; even the most negative will say that, of course I use AI, I’m not some Luddite[3]! It is integral to my work now. But I don’t just let it run the whole game. I copy–paste with judicious care. blah blah blah
The point of any Overton Window is to simulate lively debate within the confines of the premises.
And it’s impressive how many aspects of “the human” (RIP?) it covers. Emotions, self-esteem, character, identity. We are not[4] marching into irrelevance without a good consoling. Consolation?
[1] https://news.ycombinator.com/item?id=44159648
[2] You can let real nations come to mind here
This was taken from the formerly famous (and controversial among Khmer Rouge obsessed) Chomsky, now living in infamy for obvious reasons.
[3] Many paragraphs could be written about this
[4] We. Well, maybe me and others, not necessarily you. Depending on your view of whether the elites or the Mensa+ engineers will inherit the machines.
unknown|21 days ago
[deleted]
ryan_n|21 days ago
sigmoid10|21 days ago
testemailfordg2|21 days ago
neversupervised|21 days ago
krupan|21 days ago
TZubiri|21 days ago
Trasmatta|21 days ago
But I guess that's nothing new.
angrydev|21 days ago
askonomm|21 days ago
If you "set and forget", then you are vibe coding, and I do not trust for a second that the output is quality, or that you'd even know how that output fits into the larger system. You effectively delegate away the reason you are being paid onto the AI, so why pay you? What are you adding to the mix here? Your prompting skills?
Agentic programming to me is just a more efficient use of the tools I already used anyway, but it's not doing the thinking for me, it's just doing the _doing_ for me.
pdimitar|21 days ago
> What are you adding to the mix here? Your prompting skills?
The answer to that is an unironic and dead-serious "yes!".
My colleagues use Claude Opus and it does an okay job but misses important things occasionally. I've had one 18-hour session with it and fixed 3 serious but subtle and difficult to reproduce bugs. And fixed 6-7 flaky tests and our CI has been 100% green ever since.
Being a skilled operator is an actual billable skill IMO. And that will continue to be the case for a while unless the LLM companies manage to make another big leap.
I've personally witnessed Opus do world-class detective work. I even left it unattended and it churned away on a problem for almost 5h. But I spent an entire hour before that carefully telling it its success criteria, never to delete tests, never to relax requirements X & Y & Z, always to use this exact feedback loop when testing after it iterated on a fix, and a bunch of others.
In that ~5h session Opus fixed another extremely annoying bug and found mistakes in tests and corrected them after correcting the production code first and making new tests.
Opus can be scary good but you must not handwave anything away.
I found love for being an architect ever since I started using the newest generation [of scarily smart-looking] LLMs.
ramesh31|21 days ago
rpodraza|21 days ago
taway1874|21 days ago
unknown|21 days ago
[deleted]
qoez|21 days ago
cat_plus_plus|21 days ago
I also like writing code by hand, I just don't want to maintain other people's code. LMK if you need a job referral to hand refactor 20K lines of code in 2 months. Do you also enjoy working on test coverage?
jmull|21 days ago
True, and you really do need to internalize the context to be a good software developer.
However, just because coding is how you're used to internalizing context doesn't mean it's the only good way to do it.
(I've always had a problem with people jumping into coding when they don't really understand what they are doing. I don't expect LLMs to change that, but the pernicious part of the old way is that the code -- much of it developed in ignorance -- became too entrenched/expensive to change in significant ways. Perhaps that part will change? Hopefully, anyway.)
Snacklive|20 days ago
I can use AI to help me explore libraries or to replace a search, generate small snippets here and there, or even scripts that I occasionally need. But I can't vibecode, I don't know how to let go, I babysit too much, I read the code and I feel uneasy if I don't understand what I'm building, or why I'm building it in a certain way. I need to understand how the pieces work to make a whole
sathish316|21 days ago
The reason Claude code or Cursor feels addictive even if it makes mistakes is better illustrated in this post - https://x.com/cryptocyberia/status/2014380759956471820?s=46
arjie|21 days ago
For me, LLMs are joyful experiences. I think of ideas and they make them happen. Remarkable and enjoyable. I can see how someone who would rather assemble the furniture, or perhaps build it, would like to do that.
I can’t really relate but I can understand it.
nonethewiser|21 days ago
coldtea|21 days ago
It absolutely is.
>Even if I generate a 1,000 line PR in 30 minutes I still need to understand and review it. Since I am responsible for the code I ship, this makes me the bottleneck.
You don't ship it, the AI does. You're just the middleman, a middleman they can eventually remove altogether.
>Now, I would be lying if I said I didn’t use LLMs to generate code. I still use Claude, but I do so in a more controlled manner.
"I can quit if I want"
>Manually giving claude the context forces me to be familiar with the codebase myself, rather than tell it to just “cook”. It turns code generation from a passive action to a deliberate thoughtful action. It also keeps my brain engaged and active, which means I can still enter the flow state. I have found this to be the best of both worlds and a way to preserve my happiness at work.
And then soon the boss demands more output, like what the guys who leave it all to Claude, and even run 5x in parallel, deliver.
Ygg2|21 days ago
It isn't. Coding is to software engineering what calculation is to math. A necessary but insufficient condition.
> And then soon the boss demands more output, like the guys who left it all to Claude and even run 5x in parallel give.
You can get 100x output for 1/100x the price, if you replace the monthly Claude subscription with a Markov chain. Think of the efficiency gains.
Sure, it will be garbage, but think of the velocity.
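The joke above is easy to make concrete. A toy word-level Markov chain "trained" on a scrap of code (a deliberately silly example, not a real proposal) will emit token sequences that look vaguely like code at essentially zero cost, and, as promised, they are garbage:

```python
import random
from collections import defaultdict

# Toy Markov chain "code generator", trained on a scrap of Python.
CORPUS = "def add ( a , b ) : return a + b def mul ( a , b ) : return a * b".split()

def build_chain(tokens):
    """Map each token to the list of tokens that followed it in the corpus."""
    chain = defaultdict(list)
    for prev, nxt in zip(tokens, tokens[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start="def", n=12, seed=0):
    """Emit a chain of n tokens after the start token; looks like code, isn't."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(rng.choice(chain.get(out[-1], ["def"])))
    return " ".join(out)

print(generate(build_chain(CORPUS)))
```

It produces syntax-shaped noise, which is the point: raw token velocity with no model of correctness is worthless, whatever the price.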
boredemployee|21 days ago
I think we should be worrying about more urgent things, like a worker doing the job of three people with AI agents, the mental load that comes with that, how much of the disruption caused by AI will disproportionately benefit owners rather than employees, and so on.
northfield27|21 days ago
And others are unable to believe the visible (if not extreme) speed boost from pragmatic use of AI.
And sadly, whenever the discussion about the collective financial disadvantage of AI to software engineers starts, and wherever it goes…
The owners and employers will always make the profits.
xantronix|21 days ago
gitprolinux|20 days ago
meken|21 days ago
For me, LLMs have been a tremendous boon for me in terms of learning.
mooktakim|21 days ago
kmaitreys|21 days ago
It's so ironic because computers/computer programs were literally invented to avoid doing grunt work.
analog8374|21 days ago
Succinctly: process over product.
jama211|20 days ago
raw_anon_1111|21 days ago
I am not responsible for choosing whether the code I write uses a for loop or a while loop. I am responsible for whether my implementation - code, architecture, user experience - meets the functional and non-functional requirements. For well over a decade my responsibilities have required delegating work to other developers or even outsourcing an entire implementation to another company, like a SalesForce implementation.
morshu9001|21 days ago
Now that I have more experience and manage other SWEs, I was right, that stuff was dumb and I'm glad that nobody cares anymore. I'll spend the time reviewing but only the important things.
deaux|21 days ago
In fact, it's even worse - driving a car is one of the least happy modes of getting around there is. And sure, maybe you really enjoy driving one. You're a rare breed when it comes down to it.
Yet it's responsible by far for the most people-distance transported every day.
unknown|21 days ago
[deleted]
gitaarik|21 days ago
So I'll take the horse to work from now on.
sambapa|20 days ago
In other words, it's entirely okay to use a forklift to lift heavy weights, but please go to the gym.
queenkjuul|21 days ago
I almost never agree with the names Claude chooses, I despise the comments it adds every other line despite me telling it over and over and over not to, and oftentimes I catch the silly bugs that look fine at first glance when you just let Claude write its output direct to the file.
It feels like a good balance, to me. Nobody on my team is working drastically faster than me, with or without AI. It very obviously slows down my boss (who just doesn't pay attention and has to rework everything twice) or some of the juniors (who don't sufficiently understand the problem to begin with). I'll be more productive than them even if I am hand-writing most of the code. So I don't feel threatened by this idea that "hand written code will be something nobody does professionally here soon" -- like the article said, if I'm responsible for the code I submit, I'm still the bottleneck, AI or not. The time I spend writing my own code is time I'm not poring over AI output trying to verify that it's actually correct, and for now that's a good trade.
geldedus|21 days ago
pickleRick243|21 days ago
fallat|21 days ago
shmerl|21 days ago
Bean counters don't care about creativity and art though, so they'll never get it.
cat_plus_plus|21 days ago
jairojair|21 days ago
angrydev|21 days ago
You could look back throughout human history at the inventions that made labor more efficient and ask the same question. The time-savings could either result in more time to do even more work, or more time to keep projects on pace at a sane and sustainable rate. It's up to us to choose.
CraftingLinks|21 days ago
mekod|21 days ago
unknown|21 days ago
[deleted]
nextlevelwizard|21 days ago
kittbuilds|21 days ago
[deleted]
animanoir|21 days ago
[deleted]
moralestapia|21 days ago
[deleted]