
The Bitter Prediction

215 points | jannesan | 10 months ago | 4zm.org

178 comments


kassner|10 months ago

> I've never been more productive

Maybe it’s because my approach is much closer to a Product Engineer’s than a Software Engineer’s, but code output is rarely the reason why projects I’ve worked on are delayed. All my productivity issues can be attributed to poor specifications, or to problems that someone just threw over the wall. Every time I’m blocked, it’s because someone didn’t make a decision on something, or no one thought far enough ahead to see that the decision was needed.

It irks me so much when I see the managers of adjacent teams pushing for AI coding tools when the only thing the developers know about the project is what was written in the current JIRA ticket.

pards|10 months ago

> code output is rarely the reason why projects that I worked on are delayed

This is very true at large enterprises. The pre-coding tasks [0] and the post-coding tasks [1] account for the majority of elapsed time that it takes for a feature to go from inception to production.

The theory of constraints says that optimizations made to a step that's not the bottleneck will only make the actual bottleneck worse.

AI is no match for a well-established bureaucracy.

[0]: architecture reviews, requirements gathering, story-writing

[1]: infrastructure, multiple phases of testing, ops docs, sign-offs

api|10 months ago

For most software jobs, knowing what to build is harder than building it.

I’m working hard on building something right now that I’ve had several false starts on, mostly because it’s taken years for us to totally get our heads around what to build. Code output isn’t the problem.

CM30|10 months ago

Yeah, something like 95% of project issues are management and planning issues, not programming or tech ones. So often projects start out without anyone on the team researching the original problem or what their users actually need, and then the whole thing gets hastily rejigged midway through development to fix that.

inerte|10 months ago

aka https://en.wikipedia.org/wiki/No_Silver_Bullet

And it's also interesting to think that PMs are also using AI. In my company, for example, we allow users to submit feedback, and an AI summary report is sent to PMs. They then put the report into ChatGPT along with the organizational goals, the key players, and previous meeting transcripts, and ask the AI to weave everything together into a PRD, or even a 10-slide presentation.

doug_durham|10 months ago

I agree with you that traditionally that is the bottleneck. Think about why poor specifications are a problem. It's a problem because software is so costly and time consuming to create. Many times the stakeholders don't know that something isn't right until they can actually use it. What if it takes 50% less time to create code? Code becomes less precious. Throwing away failed ideas isn't as big an issue. Of course it is trivially easy to think of cases where this could also lead to never shipping your code.

d0liver|10 months ago

I feel this. As a dev, most of my time is spent thinking and asking questions.

hedgew|10 months ago

>Why bother playing when I knew there was an easier way to win? This is the exact same feeling I’m left with after a few days of using Claude Code. I don’t enjoy using the tool as much as I enjoy writing code.

My experience has been the opposite. I've enjoyed working on hobby projects more than ever, because so many of the boring and often blocking aspects of programming are sped up. You get to focus more on higher-level choices, overall design, and code quality, rather than searching for specific usages of libraries or attending to other minutiae. Learning is accelerated, and the loop of making choices and seeing code generated for them is a bit addictive.

I'm mostly worried that it might not take long for me to be a hindrance in the loop more than anything. For now I still have better overall design sense than AI, but it's already much better than I am at producing code for many common tasks. If AI develops more overall insight and sense, and the ability to handle larger code bases, it's not hard to imagine a world where I no longer even look at or know what code is written.

siffin|10 months ago

Everyone has different objective and subjective experiences, and I suspect some form of selection will favor those who more often feel excited and relieved by using AI over those who more often experience it as a negative, as if it challenges some core aspect of self.

It might challenge us, and maybe those of us who feel challenged in that way need to rise to it, for there are always harder problems to solve.

If this new tool seems to make things so easy it's like "cheating", then make the game harder. Can't cheat reality.

palata|10 months ago

The calculator made it less important to be reasonably good at arithmetic. Many people just cannot add or subtract two numbers without one. And it feels like they lose intuition, somehow: if numbers don't "speak" to you at all, can you ever realize that 17 is roughly a third of 50? The only way you realize it with a calculator is if you actually look for it. Whereas if you can count, it just appears to you.

Similar with GPS and navigation. When you read a map, you learn how to localise yourself based on landmarks you see. You tend to get an understanding of where you are, where you want to go and how to go there. But if you follow the navigation system that tells you "turn right", "continue straight", "turn right", then again you lose intuition. I have seen people following their navigation system around two blocks to finally end up right next to where they started. The navigation system was inefficient, and with some intuition they could have said "oh actually it's right behind us, this navigation is bad".

Back to coding: if you have a deep understanding of your codebases and dependencies, you may end up finding that you could actually extract some part of one codebase into a library and reuse it in another codebase. Or that instead of writing a complex task in your codebase, you could contribute a patch to a dependency and it would make it much simpler (e.g. because the dependency already has this logic internally and you could just expose it instead of rewriting it). But it requires an understanding of those dependencies: do you have access to their code in the first place (either because they are open source or belong to your company)?

Those AIs obviously help with writing code. But do they help you build an understanding of the codebase, to the point where you have intuition that can be leveraged to improve the project? Not sure.

Is it necessary, though? I don't think so: the tendency is that software becomes more and more profitable by becoming worse and worse. AI may just help writing more profitable worse code, but faster. If we can screw the consumers faster and get more money from them, that's a win, I guess.

nthingtohide|10 months ago

> Back to coding: if you have a deep understanding of your codebases and dependencies, you may end up finding that you could actually extract some part of one codebase into a library and reuse it in another codebase.

I understand the point you are making. But what makes you think refactoring won't be AI's forte? Maybe you could explicitly ask for it. Maybe you could ask it to minify while staying human-understandable, and that would achieve the refactoring objectives you have in mind.

vertnerd|10 months ago

I'm a little older now, over 60. I'm writing a spaceflight simulator for fun and (possible) profit. From game assets to coding, it seems like AI could help. But every time I try it out, I just end up feeling drained by the process of guiding it to good outcomes. It's like I have an assistant to work for me, who gets to have all the fun, but needs constant hand holding and guidance. It isn't fun at all, and for me, coding and designing a system architecture is tremendously satisfying.

I also have a large collection of handwritten family letters going back over 100 years. I've scanned many of them, but I want to transcribe them to text. The job is daunting, so I ran them through some GPT apps for handwriting recognition. GPT did an astonishing job and at first blush, I thought the problem was solved. But on deeper inspection I found that while the transcriptions sounded reasonable and accurate, significant portions were hallucinated or missing. Ok, I said, I just have to review each transcription for accuracy. Well, reading two documents side by side while looking for errors is much more draining than just reading the original letter and typing it in. I'm a very fast typist and the process doesn't take long. Plus, I get to read every letter from beginning to end while I'm working. It's fun.

So after several years of periodically experimenting with the latest LLM tools, I still haven't found a use for them in my personal life and hobbies. I'm not sure what the future world of engineering and art will look like, but I suspect it will be very different.

My wife spins wool to make yarn, then knits it into clothing. She doesn't worry much about how the clothing is styled because it's the physical process of working intimately with her hands and the raw materials that she finds satisfying. She is staying close to the fundamental process of building clothing. Now that there are machines for manufacturing fibers, fabrics and garments, her skill isn't required, but our society has grown dependent on the machines and the infrastructure needed to keep them operating. We would be helpless and naked if those were lost.

Likewise, with LLM coding, developers will no longer develop the skills needed to design or "architect" complex information processing systems, just as no one bothers to learn assembly language anymore. But those are things that someone or something must still know about. Relegating that essential role to a LLM seems like a risky move for the future of our technological civilization.

palata|10 months ago

I can relate to that.

Personally, right now I find it difficult to imagine saying "I made this" if I got an AI to generate all the code of a project. If I go to a bookstore, ask for some kind of book ("I want it to be with a hard cover, and talk about X, and be written in language Y, ..."), I don't think that at the end I will feel like I "made the book". I merely chose it, someone else made it (actually it's multiple jobs, between whoever wrote it and whoever actually printed and distributed it).

Now if I can describe a program to an AI and it results in a functioning program, can I say that I made it?

Of course it's more efficient to use knitting machines, but if I actually knit a piece of clothing, then I can say I made it. And that's what I like: I like to make things.

thwarted|10 months ago

Editing and proofreading, of code and prose, are work in themselves, work that is often not appreciated enough to be recognized as such. I think this is the basis for the perspective that if you can get the LLM to do the coding or writing, all you need to do is proof the result, as if that's somehow easier because proofing is not the real work.

musicale|10 months ago

> reading two documents side by side while looking for errors is much more draining than just reading the original letter and typing it in

Validating LLM-generated text seems to be a hard problem, because it requires a human-quality reader.

OgsyedIE|10 months ago

I think this particular anxiety was explored rather well in the anonymous short story 'The End of Creative Scarcity':

https://www.fictionpress.com/s/3353977/1/The-End-of-Creative...

Some existential objections occur; how sure are we that there isn't an infinite regress of ever deeper games to explore? Can we claim that every game has an enjoyment-nullifying hack yet to discover with no exceptions? If pampered pet animals don't appear to experience the boredom we anticipate is coming for us, is the expectation completely wrong?

nemo1618|10 months ago

Thank you for sharing this :)

bogrollben|10 months ago

This was great - thank you!

zem|10 months ago

thanks, that was wonderful

xg15|10 months ago

As far as hobby projects are concerned, I'd agree: a bit more "thinking like your boss" could be helpful. You can now focus more on the things you want your project to be able to do, instead of on the specific details of its code structure. (In the end, nothing keeps you from still manually writing or editing parts of the code if you want some things done in a certain way. There are also projects where the code structure legitimately is the feature, i.e. if you want to explore some new style of API or architecture design for its own sake.)

The one part that I believe will still be essential is understanding the code. It's one thing to use Claude as a (self-driving) car, where you delegate the actual driving but still understand the roads being taken. (Both for learning and for validating that the route is in fact correct)

It's another thing to treat it like a teleporter, where you tell it a destination and then are magically beamed to a location that sort of looks like that destination, with no way to understand how you got there or if this is really the right place.

mjburgess|10 months ago

All articles of this class, whether positive or negative, begin "I was working on a hobby project" or some variation thereof.

The purpose of hobbies is to be a hobby, archetypical tech projects are about self-mastery. You cannot improve your mastery with a "tool" that robs you of most of the minor and major creative and technical decisions of the task. Building IKEA furniture will not make you a better carpenter.

Why be a better carpenter? Because software engineering is not about hobby projects. It's about research and development at the fringes of a business's (or org's, or project's) requirements: evolving their software towards solving them.

Carpentry ("programming craft") will always (modulo 100+ years) be essential here. Powertools do not reduce the essential craft, they increase the "time to craft being required" -- they mean we run into walls of required expertise faster.

AI as applied to non-hobby projects, i.e. R&D programming in the large, where requirements aren't already specified as prior-art programs (of the functional and non-functional variety, etc.), just accelerates the time to hitting the wall where you're going to shoot yourself in the foot if you're not an expert.

I have not seen a single "sky is falling" take from an experienced software engineer, i.e. those operating at typical "in the large" programming scales, on typical R&D projects (revisions to legacy systems, or greenfield work where just the requirements are new).

mnky9800n|10 months ago

I think it also misses the way you can automate non-trivial tasks. For example, I am working on a project where there are tens of thousands of different data sets, each with its own metadata and structure, but the underlying data is mostly the same. Because the metadata and structure all differ, it's really impossible to combine all this data into one big data set without a team of engineers going through each data set and meticulously restructuring and conforming its metadata to a new monolithic schema. I don't have the money to hire that team of engineers, but I can massage LLMs into doing that work for me. These are ideal tasks for AI-type algorithms to solve. It makes me quite excited for the future, as many tasks of this kind could be given to AI agents that would otherwise be impossible to do yourself.
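The pattern described above splits naturally into two halves: ask the model to propose a field mapping from each data set's quirky metadata to the monolithic schema, then apply that mapping mechanically. A minimal sketch, where the field names, the target schema, and the hard-coded mapping (which would really come back from an LLM call) are all hypothetical:

```python
import json

# Hypothetical monolithic target schema, purely for illustration.
TARGET_FIELDS = ["station_id", "timestamp", "value"]

def mapping_prompt(source_fields, target_fields):
    """Build the prompt asking an LLM to propose a source->target field mapping as JSON."""
    return (
        "Map each source field to exactly one target field. Reply with JSON only.\n"
        f"Source fields: {source_fields}\n"
        f"Target fields: {target_fields}"
    )

def conform(record, mapping):
    """Mechanically rename one record's fields using the LLM-proposed mapping."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# In practice `mapping` is parsed from the model's reply; hard-coded here
# so the mechanical half of the pipeline can be shown on its own.
mapping = {"SiteCode": "station_id", "obs_time": "timestamp", "reading": "value"}
record = {"SiteCode": "DE-042", "obs_time": "2024-01-01T00:00Z", "reading": 3.7}
print(json.dumps(conform(record, mapping), sort_keys=True))
```

The point is that the model only has to produce a small, checkable mapping per data set, while the bulk renaming stays deterministic code.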

skerit|10 months ago

I've used Claude-Code & Roo-Code plenty of times with my hobby projects.

I understand what the article means, but sometimes I've got the broad scopes of a feature in my head, and I just want it to work. Sometimes programming isn't like "solving a puzzle", sometimes it's just a huge grind. And if I can let an LLM do it 10 times faster, I'm quite happy with that.

I've always had to fix up the code one way or another, though. And most of the time the code is quite bad (even from Claude Sonnet 3.7 or Gemini Pro 2.5), but it _did_ point me in the right direction.

About the cost: I'm only using Gemini Pro 2.5 Experimental the past few weeks. I get to retry things so many times for free, it's great. But if I had to actually pay for all the millions upon millions of used tokens, it would have cost me *a lot* of money, and I don't want to pay that. (Though I think token usage can be improved a lot, tools like Roo-Code seem very wasteful on that front)

fhd2|10 months ago

> I have not seen a single take by an experienced software engineer have a "sky is falling" take,

Let me save everybody some time:

1. They're not saying it because they don't want to think of themselves as obsolete.

2. You're not using AI right, programmers who do will take your job.

3. What model/version/prompt did you use? Works For Me.

But seriously: It does not matter _that_ much what experienced engineers think. If the end result looks good enough for laymen and there's no short term negative outcomes, the most idiotic things can build up steam for a long time. There is usually an inevitable correction, but it can take decades. I personally accept that, the world is a bit mad sometimes, but we deal with it.

My personal opinion is pretty chill: I don't know if what I can do will still be needed n years from now. It might be that I need to change my approach, learn something new, or whatever. But I don't spend all that much time worrying about what was, or what will be. I have problems to solve right now, and I solve them with the best options available to me right now.

People spending their days solving problems probably generally don't have much time to create science fiction.

davidanekstein|10 months ago

I think AI is posing a challenge to people like the person in TFA because programming is their hobby and one that they’re good at. They aren’t used to knowing that someone or something can do it better, and knowing that now makes them wonder what the point is. I argue that amateur artists and musicians have dealt with this feeling of “someone can always do it better” for a very long time. You can have fun while knowing someone else can make it better than you, faster, without as much struggle. Programmers aren’t as used to this feeling because, even though we know people like John Carmack exist, it doesn’t fly in your face quite like a beautiful live performance or painted masterpiece does. Learning to enjoy your own process is what I think is key to continuing what you love. Or, use it as an opportunity to try something else — but you’ll eventually discover the same thing no matter what you do. It’s very rare to be the best at something.

palata|10 months ago

> can make it better than you, faster, without as much struggle

Still need to prove that AI-generated code is "better", though.

"More profitable", in a world where software generally becomes worse (for the consumers) and more profitable (for the companies), sure.

dbalatero|10 months ago

I'm both relatively experienced as a musician and software engineer so I kinda see both sides. If musicians want to get better, they have to go to the practice room and work. There's a satisfaction to doing this work and coming out the other side with that hard-won growth.

Prior to AI, this was also true with software engineering. Now, at least for the time being, programmers can increase productivity and output, which seems good on the surface. However, with AI, one trades the hard work and brain cells created by actively practicing and struggling with craft for this productivity gain. In the long run, is this worth it?

To me, this is the bummer.

exfalso|10 months ago

I'm more and more confident I must be doing something wrong. I (re)tried using Claude about a month ago, and I simply stopped after about two weeks because, on one hand, productivity did not increase (it perhaps even decreased), and on the other, it made me angry because of the time wasted on its mistakes. I was mostly using it on Rust code, so I'm even more surprised by the article. What am I doing wrong? I've been mostly using the chat functionality and auto-complete; is there some kind of secret feature I'm missing?

creata|10 months ago

I'd love to watch a video of someone using these tools well, because I am not getting much out of it. They save some time, sometimes, but they're nowhere near the 5x boost that some people claim.

whiplash451|10 months ago

The thing is: the industry does not need people who are good at (or enjoy) programming, it needs people who are good at (and enjoy) generating value for customers through code.

So the OP was in a bad place without Claude anyways (in industry at least).

This realization is the true bitter one for many engineers.

blackbear_|10 months ago

Productivity at work is well correlated with enjoyment of work, so the industry better look for people who enjoy programming.

The realization that productive workers aren't just replaceable cogs in the machine is also a bitter lesson for businessmen.

xg15|10 months ago

> generating value for customers through code.

Generating value for the shareholders and/or investors, not the customers. I suspect this is the next bitter lesson for developers.

constantcrying|10 months ago

Writing software will never again be a skill worth 100k a year.

I am sure Software developers are here to stay, but nobody who just writes software is worth anywhere close to 100k a year. Either AI or outsourcing is making sure of that.

jannesan|10 months ago

That’s a good point. I do think there still is some space to focus on just the coding as an engineer, but with AI the space is getting smaller.

xg15|10 months ago

A question that came up in discussions recently and that I found interesting: How will new APIs, libraries or tooling be introduced in the future?

The models all have their specific innate knowledge of the programming ecosystem from the point in time where their last training data was collected. However, unlike humans, they cannot update that knowledge unless a new finetuning is performed - and even then, they can only learn about new libraries that are already in widespread use.

So if everyone now shifts to Vibe Coding, will this now mean that software ecosystems effectively become frozen? New libraries cannot gain popularity because AIs won't use them in code and AIs won't start to use them because they aren't popular.

benoau|10 months ago

I guess the counter-question is does it matter if nobody is building tools optimized for humans, when humans aren't being paid to write software?

I saw a submission earlier today that really illustrated perfectly why AI is eating people who write code:

> You could spend a day debating your architecture: slices, layers, shapes, vegetables, or smalltalk. You could spend several days eliminating the biggest risks by building proofs-of-concept to eliminate unknowns. You could spend a week figuring out how you’ll store, search, and cache data and which third–party integrations you’ll need.

$5k/person/week to have an informed opinion on how to store your data! AI is going to look at the billion times we've already asked these questions and make an instant decision, and the really, really important part is that it doesn't much matter what we choose anyway, because there are dozens of right answers.

mckn1ght|10 months ago

There will still be people who care to go deeper and learn what an API is and how to design a good one. They will be able to build the services and clients faster and go deeper using AI code assistants.

And then, yes, you’ll have the legions of vibe coders living in Plato’s cave and churning out tinker toys.

mike_hearn|10 months ago

It's not an issue. Claude routinely uses internal APIs and frameworks on one of my projects that aren't public. The context windows are big enough now that it can learn from a mix of summarized docs and surrounding examples and get it nearly right, nearly all the time.

There is an interesting aspect to this whereby there's maybe more incentive to open source stuff now just to get usage examples in the training set. But if context windows keep expanding it may also just not matter.

The trick is to have good docs. If you don't then step one is to work with the model to write some. It can then write its own summaries based on what it found 'surprising' and those can be loaded into the context when needed.

c7b|10 months ago

Not sure this is going to be a big issue in practice. Tools like ChatGPT regularly get new knowledge cutoffs, and those seem to work well in my experience. I haven't tested this with programming features specifically, but you could simply do a small experiment: take the tool of your choice and a programming feature introduced after it first launched, and see whether you can get it to use the feature correctly.

fragmede|10 months ago

> unless a new finetuning is performed

That's where we're at. The LLM needs to be told about the brand new API by feeding it new docs, which just uses up tokens in its context window.
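The "feed it new docs" workaround amounts to prepending the fresh API documentation to the chat context before the actual request. A minimal sketch of that prompt assembly, where `newlib.fetch` and the message wording are made up for illustration:

```python
def messages_with_docs(api_docs, user_request):
    """Prepend post-cutoff API docs to the chat context; the docs simply
    consume context-window tokens on every request that needs them."""
    system = "Use ONLY the API described in these docs:\n\n" + api_docs
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_request},
    ]

# `newlib.fetch` is a hypothetical API added after the model's training cutoff.
docs = "newlib.fetch(url: str) -> bytes"
msgs = messages_with_docs(docs, "Download https://example.com using newlib.")
print(msgs[0]["role"], len(msgs))
```

This is the same message structure most chat-completion APIs accept, which is why the cost shows up directly as extra input tokens per call.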

zkmon|10 months ago

It's not true that coding would no longer be fun because of AI. Arithmetic did not stop being fun because of calculators. Travel did not stop being fun because of cars and planes. Life did not stop being fun because of lack of old challenges.

New challenges would come up. If calculators made the arithmetic easy, math challenges move to next higher level. If AI does all the thinking and creativity, human would move to next level. That level could be some menial work which AI can't touch. For example, navigating the complexities of legacy systems and workflows and human interactions needed to keep things working.

fire_lake|10 months ago

> For example, navigating the complexities of legacy systems and workflows and human interactions needed to keep things working.

Well this sounds delightful! Glad to be free of the thinking and creativity!

wizzwizz4|10 months ago

I find legacy systems fun because you're looking at an artefact built over the years by people. I can get a lot of insight into how a system's design and requirements changed over time, by studying legacy code. All of that will be lost, drowned in machine-generated slop, if next decade's legacy code comes out the backside of a language model.

keybored|10 months ago

> New challenges would come up. If calculators made the arithmetic easy, math challenges move to next higher level. If AI does all the thinking and creativity, human would move to next level. That level could be some menial work which AI can't touch. For example, navigating the complexities of legacy systems and workflows and human interactions needed to keep things working.

You’re gonna work on captcha puzzles and you’re gonna like it.

IshKebab|10 months ago

> Not only that, the generated code was high-quality, efficient, and conformed to my coding guidelines. It routinely "checked its work" by running unit tests to eliminate hallucinations and bugs.

This seems completely out of whack with my experience of AI coding. I'm definitely in the "it's extremely useful" camp but there's no way I would describe its code as high quality and efficient. It can do simple tasks but it often gets things just completely wrong, or takes a noob-level approach (e.g. O(N) instead of O(1)).

Is there some trick to this that I don't know? Because personally I would love it if AI could do some of the grunt work for me. I do enjoy programming but not all programming.
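The O(N)-versus-O(1) complaint above has a canonical example: membership testing. A list is scanned on every lookup, while a set is built once and then hashed into (a generic illustration, not code from the thread):

```python
# The "noob-level" version scans a list on every lookup: O(N) per query.
# Building a set once gives amortized O(1) lookups afterwards.
allowed_list = [f"user{i}" for i in range(50_000)]

def count_hits_list(queries, allowed):
    return sum(1 for q in queries if q in allowed)      # linear scan each time

def count_hits_set(queries, allowed):
    allowed_set = set(allowed)                          # built once
    return sum(1 for q in queries if q in allowed_set)  # hash lookup each time

queries = ["user1", "nobody", "user49999"]
print(count_hits_list(queries, allowed_list), count_hits_set(queries, allowed_list))
```

Both return the same counts; only the asymptotics differ, which is exactly the kind of change a reviewer has to catch because the generated code still "works".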

joelthelion|10 months ago

Which model and tool are you using? There's a whole spectrum of AI-assisted coding.

frognumber|10 months ago

I may be old, but I had the same feeling for low-level code. I enjoyed doing things like optimizing a low-level loop in C or assembly, bootstrapping a microcontroller, or writing code for a processor which didn't have a compiler yet. Even in BASIC, I enjoyed PEEKing and POKE'ing. I enjoyed opening up a file system in a binary editor. I enjoyed optimizing how my computer draws a line.

All this went away. I felt a loss of joy and nostalgia for it. It was bitter.

Not bad, but bitter.

whiplash451|10 months ago

The author is doing the math the wrong way. For an extra $5/day, a company in a third-world country can now pay an engineer $20/day to do the job of a junior engineer in a first-world one.

The bitter lesson is going to be for junior engineers, who will see fewer job offers and won't see the consulting powerhouses eating their lunch.

inerte|10 months ago

Yes, these were my thoughts at the end of the article. If AI coding is really good (or becomes really, really good), you could give a six-figure salary + $5/day in OpenAI credits to a Bay Area developer, OR a $5/day salary + $5/day in OpenAI credits to someone in another country.

That's what happened to manufacturing after all.

palata|10 months ago

I tend to think about the average code review: who actually catches tricky bugs? Who actually takes the time to fully understand the code they review? And who likes it? My feeling is that reviews are generally a "skimming through the code and checking that it looks ok from a distance".

At least we have one person who understands it in details: the one who wrote it.

But with AI-generated code, it feels like nobody writes it anymore: everybody reviews. Not only do we not like reviewing, we don't do it well. And if you want to review code thoroughly, you may as well write it. Many open source maintainers will tell you that it's often faster for them to write the code than to review a PR from a stranger they don't trust.

M4v3R|10 months ago

To me it’s the exact opposite. I’ve been writing code for the past 20+ years, and I recently realized it’s not the act of writing code I love, but the act of creating something from nothing. Over the past few months I wrote two non-trivial utility apps that I otherwise would probably not have written because I didn’t have enough time, but Cursor + Claude gave me the 5x productivity boost that enabled me to do so, and I really enjoyed it.

My only gripe is that the models are still pretty slow, and that discourages iteration and experimentation. I can’t wait for the day a Claude 3.5 grade model with 1000 tok/s speed releases, this will be a total game changer for me. Gemini 2.5 recently came closer, but it’s still not there.

float4|10 months ago

For me it's a bit of both. I'm working on exciting energy software with people who have deep knowledge of the sector but only semi-decent software knowledge. Nearly every day I'm reviewing some shitty PR comprised of awful, ugly code that somehow mostly works.

The product itself is exciting and solves a very real problem, and we have many customers who want to use it and pay for it. But damn, it hurts my soul knowing what goes on under the hood.

nu11ptr|10 months ago

I've kind of hit the same place. I thought I loved writing code, but I so often start projects and don't finish once the excitement of writing all the code wears off. I'm realizing it's the designing and architecting that I love, and seeing that get built, not writing every line of code. I'm also enjoying AI, as my velocity has solidly improved.

Another area I find very helpful is when I need to use the same technique in my code as someone from another language. No longer do I need to spend hours figuring out how they did it. I just ask an AI and have them explain it to me and then often simply translate the code.

hsuduebc2|10 months ago

Same here. I don't usually enjoy programming as a craft, but the act of building something is what makes it a lovable experience.

AndrewKemendo|10 months ago

I had a conversation with a fellow tech founder (Running a $Bn+ val Series D robotics company currently) recently on AI assisted coding tools.

We have both been using or integrating AI code support tools since they became available and both writing code (usually Python) for 20+ years.

We both agree that Windsurf + Claude is our default IDE/environment from now on. We also agree that for all future projects we think we can likely cut the number of engineers needed to a third.

Based on what I’ve been using for the last year professionally (Copilot) and on the side, I’m confident I could build faster, better, and with less effort with 5 engineers and AI tools than with 10 or 15. Communication overhead also reduces by 3x, which prevents slowdowns.

So if I have an HA 5-layer stack application (fe, be, analytics, train/inference, networking/data mgt) with IPCs between them, instead of one senior and two juniors per process for a total of 15 people, I only need the 5 mid-seniors now.

cardanome|10 months ago

A relatively well-known youtuber called ThePrimeagen recently did a challenge sponsored by Cursor themselves where he and some friends would "vibe code" a game in a week. The results were pretty underwhelming. They would have been much faster not using generative AI.

Compared what you see from game jams where sometimes solo devs create whole games in just a few days it was pretty trash.

It also tracks with my own experience. Yes, Cursor quickly helps me get the first 80% done, but then I spend so much time cleaning up after it that I have barely saved any time in total.

For personal projects where you don't care about code quality I can see it as a great tool. If you actually have professional standards, no. (Except maybe for unit tests, I hate writing those by hand.)

Most of the current limitations CAN be solved by throwing even more compute at it. Absolutely. The question is: will it make economic sense? Maybe if fusion becomes viable some day, but currently, with the end of fossil fuels and climate change? Is generative AI worth destroying our planet for?

At some point the energy consumption of generative AI might get so high and expensive that you might be better off just letting humans do the work.

sigmoid10|10 months ago

I feel most people drastically underestimate game dev. The programming aspect is only one tiny part of it and even there it goes so wide (from in-game logic to rendering to physics) that it's near impossible for people who are not really deep into it to have a clue what is happening. And even if you manage to vibe-code your way through it, your game will still suck unless you have good assets - which means textures, models, animations, sounds, FX... you get it. Developing a high quality game is sort of the ultimate test for AI and if it achieves it on a scale beyond game jams we might as well accept that we have reached artificial superintelligence.

dinfinity|10 months ago

To be fair, the whole "vibe coding" thing is really really new stuff. It will undoubtedly take some time to optimize how to actually effectively do it.

Recently, we've seen a shift in thinking: not just diving straight into implementation, but actually spending time on careful specification, discussion, and documentation, either with or without an AI assistant, before setting it loose to implement stuff.

For large, existing codebases, I sincerely believe that the biggest improvements lie in using MCP and proper instructions to connect the AI assistants to spec and documentation. For new projects I would put pretty much all of that directly into the repos.

nyarlathotep_|10 months ago

> A relatively well-known youtuber called ThePrimeagen recently did a challenge sponsored by Cursor themselves where he and some friends would "vibe code" a game in a week. The results were pretty underwhelming. They would have been much faster not using generative AI.

I ended up watching maybe 10 minutes of these streams on two separate occasions, and he was writing code manually 90% of the time on both occasions, or yelling at LLM output.

jstummbillig|10 months ago

I don't really see it. At least the article should address why we would not assume massive price drops, market adjusted pricing and free offerings, as with all other innovation before, that all lead to wider access to better technology.

Why would this be the exception?

ignoramous|10 months ago

If that happens, I can see those programmers becoming their age's Uber drivers (a low-pay, low-skill, unsatisfying gig workforce).

DeathArrow|10 months ago

>Why bother playing when I knew there was an easier way to win?

>This is the exact same feeling I’m left with after a few days of using Claude Code.

For me what matters is the end result, not the mere act of writing code. What I enjoy is solving problems and building stuff. Writing code is a part.

I would gladly use a tool to speed up that part.

But from my testing, unless the task is very simple and trivial, using AI isn't always a walk in the park, simple and efficient.

weinzierl|10 months ago

"But I predict software development will be a lot less fun in the years to come, and that is a very bitter prediction indeed."

Most professional software development hasn't been fun for years, mostly because of all the required ceremony around it. But it doesn't matter, for your hobby projects you can do what you want and it's up to you how much you let AI change that.

coolThingsFirst|10 months ago

I still think that, as harsh as it sounds, amazement at AI tools signals incompetence on the part of the user. They are useful, don't get me wrong, but just today Claude wrote code that literally wouldn't run.

It thought it was OK to use `new` with an object literal in JS.

gadilif|10 months ago

I can really relate to the feeling described after modifying save files to get more resources in a game, but I wonder if it's the same kind of 'cheating'. Doing better in a game has its own associated feeling of achievement, and cheating definitely robs you of that, which to me explains why playing will be less fun. Moving faster on a side project or at work doesn't feel like the same kind of shortcut/cheat. Most of us no longer program in assembly language, and we still maintain a sense of achievement using higher-level languages, which naturally abstract away a lot of the details. Isn't using AI to hide away implementation details just the natural next step, where instead of lengthy, error-prone machine-level code, you have a few modern language instructions?

lloeki|10 months ago

> Moving faster on a side project or at work doesn't feel like the same kind of shortcut/cheat.

Depends whether you're in it for the endgame or the journey.

For some the latter is a means to the former, and for others it's the other way around.

jwblackwell|10 months ago

The author is essentially arguing that fewer people will be able to build software in the future.

That's the opposite of what's happened over the past year or two. Now many more non-technical people can build (and are building) software.

walleeee|10 months ago

> The author is essentially arguing that fewer people will be able to build software in the future.

Setting aside the fact that the author nowhere says this, it may in fact be plausible.

> That's the opposite of what's happened over the past year or two. Now many more non-technical people can (and are) building software.

Meanwhile half[0] the students supposed to be learning to build software in university will fail to learn something important because they asked Claude instead of thinking about it. (Or all the students using llms will fail to learn something half the time, etc.)

[0]: https://www.anthropic.com/news/anthropic-education-report-ho...

> That said, nearly half (~47%) of student-AI conversations were Direct—that is, seeking answers or content with minimal engagement.

wobfan|10 months ago

No, he never states this, and it is not true.

The author tells of his experience of the joy of programming things and figuring stuff out. In the end he says that AI made him lose this joy, and he compares it to cheating in a game. He does not say one word about societal impact or the number of engineers in the future; that is your own interpretation.

freb3n|10 months ago

The financial barrier point is really great.

I feel the same with a lot of points made here, but hadn't yet thought about the financial one.

When I started out with web development that was one of the things I really loved. Anyone can just read about html, css and Javascript and get started with any kind of free to use code editor.

Though you can still do just that, it seems like you would always lag behind the 'cool guys' using AI.

M4v3R|10 months ago

You still don’t need AI to write software, but investing in it will make you more productive. More money enables you to buy better tools; that was always true for any trade. My friend is a woodworker and his tools are 5-10x more expensive than what I have in my shack, but they are also more precise, more reliable, and easier to use. AI is the same, and I would even argue it gives you a bigger productivity boost for less money (especially given that local models are getting better literally every week).

qingcharles|10 months ago

These platforms all feel like they are being massively subsidized right now. I'm hoping that continues and they just burn investor cash in a race to the bottom.

pornel|10 months ago

AI will be cheap to run.

The hardware for AI is getting cheaper and more efficient, and the models are getting less wasteful too.

Just a few years ago GPT-3.5 used to be a secret sauce running on the most expensive GPU racks, and now models beating it are available with open weights and run on high end consumer hardware. Few iterations down the line good-enough models will run on average hardware.

When that XCOM game came out, filmmaking, 3D graphics, and machine learning required super-expensive hardware out of reach of most people. Now you can find objectively better hardware literally in the trash.

cardanome|10 months ago

I wouldn't be so optimistic.

Moore's law is withering away due to physical limitations. Energy prices are going up because of the end of fossil fuels and rising climate change costs. Furthermore, the global supply chain is under attack from rising geopolitical tension.

Depending on US tariffs and how the Taiwan situation plays out and many other risks, it might be that compute will get MORE expensive in the future.

While there is room for optimization on the generative AI front, we still have not even reached the point where generative AI is actually good at programming. We have promising toys, but for real productivity we need orders of magnitude bigger models. Just look at how GPT-4.5 is barely economically viable already with its price per token.

Sure if humanity survives long enough to widely employ fusion energy, it might become practical and cheap again but that will be a long and rocky road.

HarHarVeryFunny|10 months ago

Coding itself can be fun, perhaps especially when one is trying to optimize in some way (faster, less memory usage, more minimal, etc), but at least for me (been S/W eng for 45+ years) I think the real satisfaction is conquering the complexity and challenges of the project, and ultimately the ability to dream about something and conjure it up to become a reality. Maybe coding itself was more fun back in the day of 8-bit micros where everything was a challenge (not enough speed or memory), but nowadays typically that is not the case - it's more about the complexity of what is being built (unless it's some boilerplate CRUD app where there is no fun or challenge at all).

With today's AI, driven by code examples it was trained on, it seems more likely to be able to do a good job of optimization in many cases than to have gleaned the principles of conquering complexity, writing bug-free code that is easy and flexible to modify, etc. To be able to learn these "journeyman skills" an LLM would need to either have access to a large number of LARGE projects (not just Stack Overflow snippets) and/or the thought processes (typically not written down) of why certain design decisions were made for a given project.

So, at least for time being, as a developer wielding AI as a tool, I think we can still have the satisfaction of the higher level design (which may be unwise to leave to the AI, until it is better able to reason and learn), while leaving the drudgework (& a little bit of the fun) of coding to the tool. In any case we can still have the satisfaction of dreaming something up and making it real.

JKCalhoun|10 months ago

> In some countries, more than 90% of the population lives on less than $5 per day. If agentic AI code generation becomes the most effective way to write high-quality code, this will create a massive barrier to entry … Don't even get me started on the green house gas emissions of data centers...

My (naive?) assumption is that all of this will come down: the price (eventually free) and the energy costs.

Then again, my daughters know I am Pollyanna (someone has to be).

gitfan86|10 months ago

I'm not following the logic here. There are tons of free tier AI products available. That makes the world more fair for people in very poor countries not less.

ben_w|10 months ago

Lots of models are free, and useful even, but the best ones are not.

I'm not sure how much RAM is on the average smartphone owned by someone earning $5/day*, but it's absolutely not going to be the half a terabyte needed for the larger models whose weights you can just download.

It will change, but I don't know how fast.

* I kinda expect that to be around the threshold where they will actually have a smartphone, even though the number of smartphones in the world is greater than the number of people

anovikov|10 months ago

I can't see why it's a bitter prediction. It's an observation from all my life that boring, mind-numbing but high impact work makes the best money. Now smart people go into coding because it's a thrill, they enjoy doing it for the sake of it. Once this is no longer the case, these people will be out, and competition will become lower and there will be easier bucks to make.

oliviergg|10 months ago

For me, it’s the opposite, I had somewhat lost my love for my job as a developer between two JavaScript framework wars or wars between craftsmanship and agile. I think we now have the opportunity to return to addressing actual needs. For me, that has always been the driving force, an idea becomes a product. These agents have rekindled my desire to create things.

DeathArrow|10 months ago

>Will programming eventually be relegated to a hobby?

I don't regard programming as merely the act of outputing code. Planning, architecting, having a high level overview, keeping the objective in focus also matters.

Even if we regard programming as just writing code, we have to ask ourselves why we do it.

We plant cereals to be able to eat. At first we used some primitive stone tools to dig the fields. Then we used bronze tools, then iron tools. Then we employed horses to plough the fields more efficiently. Then we used tractors.

Our goal was to eat, not to plough the fields.

Many objects are mass produced now while they were the craft of the artisans centuries ago. We still have craftsmen who enjoy doing things by hand and whose products command a big premium over mass market products.

I don't have an issue if most of the code will be written by AI tools, provided that code is efficient and does exactly what we need. We will still have to manage and verify those tools, and to do that we will still have to understand the whole stack from the very bottom - digital gates and circuits to the highest abstractions.

AI is just another tool in the toolbox. Some carpenters like to use very simple hand tools while others swear by the most modern ones, like CNC.

jannesan|10 months ago

this article precisely captures what i have been thinking recently. it’s really demotivating me.

ben_w|10 months ago

Sounds about right, but consider also that music, painting, sculpture, and theatre are all simultaneously (1) hobbies requiring great skill to master and from which people derive much joy, and (2) experiences that can be bought for a pittance as a download, a "print your own {thing}" shop, 3D printing, etc., or YouTube.

The bathwater of economics will surely dirty, but you don't need to throw out the baby of hobbies with it.

skybrian|10 months ago

To put the cost into context, spending $5 a day on tools is ludicrously cheap compared to paying minimum wage, let alone a programmer’s salary. Programming is only free if you already know how to code and don’t value your time.

Many of us do write code for fun, but that results in a skewed perspective where we don’t realize how inaccessible it is for most people. Programmers are providers of expensive professional services and only businesses that spread the costs over many customers can afford us.

So if anything, these new tools will make some kinds of bespoke software development more accessible to people who couldn’t afford professional help before.

Although, most people don’t need to write new code at all. Using either free software or buying off-the-shelf software (such as from an app store) works fine for most people in most situations. Personal, customized software is a niche.

aeonik|10 months ago

Software could be much, much cheaper if libraries were easier to use, and data formats and protocols were more open.

So much code I have written and worked with is either CRUD or compatibility layers for un/under-documented formats.

It's as if most of the industry were plumbers, but we are mining and fabricating the materials for the pipes, and digging trenches to and from every residence, using completely different pipes and designs for every. single. connection.

gtirloni|10 months ago

Sure, we can throw code over the wall faster. Is that all that matters though? Just like in poetry, prose, images, etc, AI generates average or worse code. Sure, it may do the job and if your goal is to be average, fine, you should be worried. But has anyone with deep knowledge in programming and a desire to excel actually looked at AI-generated code and thought "omg, this is a work of art. it's so perfect and maintenance will be much easier than anything I could have done! plus, it matches all the requirements from the stakeholders"?

Don't get me wrong, it lets me be more productive sometimes but people that think the days of humans programming computers are numbered have a very rosy (and naive) view of the software engineering world, in my opinion.

constantcrying|10 months ago

>I just missed writing code.

Even before AI really took of that was an experience many developers, including me, had. Outsourcing has taken over much of the industry. If you work in the west, there is a good probability that a large part of your work is managing remote teams, often in India or other low cost countries.

What AI could change is either reducing the value of outsourcing or make software development so accessible that managing the outsourcing becomes unnecessary.

Either way, I do believe that software developers are here to stay. They won't be writing much code in any case. A software developer in the US costs 100k a year, and writing software simply will never again be worth 100k a year. There are people and programs that are much cheaper.

gwern|10 months ago

> Forty-six percent of the global population lives on less than $5 per day. In some countries, more than 90% of the population lives on less than $5 per day. If agentic AI code generation becomes the most effective way to write high-quality code, this will create a massive barrier to entry. Access to technology is already a major class and inequality problem. My bitter prediction is that these expensive frontier models will become as indispensable for software development as they are inaccessible to most of the world’s population.

Forty-six percent of the global population has never hired a human programmer either because a good human programmer costs more than $5 a day{{citation needed}}.

fragmede|10 months ago

How much of the global population has hired another person to do something for them directly? If I go to the store and the cashier does the transaction, I haven't hired a human. so more broadly, do most people hire other humans for jobs? that seems like a rich person thing to me in the first place.

broken-kebab|10 months ago

It's the normal flow of things in the industry, isn't it? It used to be an important skill for a programmer to optimize constantly. Tasks like "We need to cut half a kilobyte at least!" were challenging, satisfying puzzles. And today you open a news webpage, it takes 1.5 GiB, and who cares? Typing speed used to be an important skill too, and nowadays one can be a decent software developer using two fingers. Memorizing names and parameters used to be extremely important until autocomplete and autosuggest appeared. I could probably expand this list to a hundred points.

Kiro|10 months ago

AI has made me love programming again. I can finally focus on the creative parts only.

falcor84|10 months ago

I'm possibly doing it wrong, but that hasn't quite been my experience. While with vibe coding I do still get to express my creativity, my biggest role in this creative partnership still seems to be copy and pasting console error messages and screenshots back to the LLM.

faragon|10 months ago

The main use I find for LLMs is code review and corrections following a list of criteria. It helps to detect overlooked issues.

It is also useful for learning from independent code snippets, e.g. when learning a new API.

admiralrohan|10 months ago

Cost of AI coding tools may decrease in future making it more accessible for everyone. And we will all be forced to move up the value ladder.

visarga|10 months ago

We move up, down, or sideways on the stack. That's the outcome. Not necessarily bad. It requires soul-searching to find one's new place.

BrenBarn|10 months ago

The idea of "breaking the game" here is similar to that expressed in this other recent post: https://news.ycombinator.com/item?id=43650656 . The focus here is a bit different though.

> It makes economic sense, and capitalism is not sentimental.

I find this kind of fatalism irritating. If capitalism isn't doing what we as humans want it to do, we can change it.

1oooqooq|10 months ago

my AI-pilled coworker committed some code using a promise with a lambda that resolved it in a one-liner; the parameter was called resolve.

for some reason he also included an import for "resolve" from "dns".

(the code didn't even need a promise there)

ineedasername|10 months ago

Some people like to whittle wood. It’s no longer a career choice with strong prospects.

As for: ” In some countries, more than 90% of the population lives on less than $5 per day.”

Well, with the orders of magnitude difference already in place, this is not going to meaningfully impact that at all.

I'm not dismissing this: I'm saying that it isn't much of a building block in thinking about all of the things AI is going to change and that should be addressed as a result, because it's simply in the pile of problems labeled "was here before, will be here after".

And really, it ought to be thought of in the context of “can we leverage AI to help address this problem in ways we cannot do so now?”