I feel for people who say that "AI has taken the fun out of programming" for them, but at the same time I think to myself: is it about doing, or is it about getting things done? Like, I imagine someone in the past who loved their job walking each night through their city, lighting up the gas-powered street lights. And then one day someone else implemented electric street lights, and the first person lost the job they loved. But in the end, it's about providing light to the city streets, no?
For the great majority of work, it is not about fun, but about doing something other people need or want.
For me, AI allows me to realize my ideas and get things done. Some of it might be good, some of it might be bad. I put in at least as much time, attention, and effort as the "real" programmers do, but my time goes into thinking and precisely defining what I want, cutting it up into smaller logical modules, testing, identifying and fixing bugs, iterating all the time.
> Like, I imagine someone in the past who loved their job walking each night through their city, lighting up the gas-powered street lights. And then one day someone else implemented electric street lights, and the first person lost the job they loved. But in the end, it's about providing light to the city streets, no?
Lighting or extinguishing a gas lamp does not allow for creative expression.
Writing a program does.
The comparison is almost offensive.
> For the great majority of work, it is not about fun, but about doing something other people need or want.
Some of us write code for reasons that are not related to employment. The idea that someone else might find the software useful is speculative, and perhaps an expression of hubris; it's not the source of motivation.
> I put in at least as much time, attention, and effort as the "real" programmers do, but my time goes into thinking and precisely defining what I want, cutting it up into smaller logical modules, testing, identifying and fixing bugs, iterating all the time.
So does the time of the "real programmers".
Who says the thing is done? There is a massive danger now, with the sheer amount of complexity and speed brought by AI, in that it's increasingly harder to verify, to do the proof-of-work.
>> AI allows me to realize my ideas
Sure, for a personal/pet project. However, when working for a customer/client, they have ideas, needs, and wants, and usually have their own users and shareholders to satisfy - they need proof.
>> lighting up the gas-powered street lights
OK, no: this metaphor may well be loved by AI companies, but it doesn't actually work on so many levels. For one, AI (as actually provided) is not electricity or a physical system, a brain, or a mind; it's software (I use it very selectively). Second, the job being done (lighting, or coding) is ultimately to produce the desired outcome for whoever ordered it - a solution to a problem. Failing that, it's just work and wages for the worker but no effective solution (lighting the dark side of the moon, kind of).
I agree with the OP: as system complexity goes up, so must the ability to keep up.
> For the great majority of work, it is not about fun, but about doing something other people need or want.
The essence of this, I think, is that a sense of craftsmanship and appreciation for craft often goes hand in hand with the ethos of learning and understanding what you are working with.
So there is the issue of who rightly deserves to get the satisfaction out of getting things done. But there's also the fact that satisfaction goes hand in hand with craft, with knowledge. And that informs a perspective of being able to do things.
I finally read Adrift: 76 Days Lost at Sea, a fantastic book about surviving in a life raft while drifting across the ocean. The difference between life and death was an individual with an abundance of practical survival, sailing, and navigation knowledge. So there's something to the idea of valuing the ability to call on hard-earned deep knowledge, and a relationship to knowledge that doesn't abstract it away.
Almost paralleling questions of hosting your own data or entrusting it to centralized services.
I truly enjoy programming, but the most frustrating part for me was that I had many ideas and too little time to work on everything.
Thanks to AI I can now work on many side projects at the same time and, most importantly, just (as you mentioned) get stuff done quickly, most of the time with good-enough (or sometimes excellent) results.
I'm both amazed and a bit sad, but the reality is that my output has increased significantly - although the quality might have dropped a bit in certain areas.
Time is limited, and if I can increase my results the same way the electric street lights did, I can simply look back at the past and smile that I lived in a time when lighting up gas-powered street lights was considered a skill.
As you perfectly put it, it's not about the process per se, it's about the result. And the result is that, for now, the lights are only 80% lit. In a few months or years we'll probably reach the threshold where the electric street lights are brighter than the gas-powered ones, and you'd be a fool to still light them up one by one.
Making things is often not just about making the thing right in front of you, but about building the skills to make bigger and better things. When you consider the long view, the struggle that makes it harder to make the thing at hand is well worth it. We have long considered taking shortcuts that don’t build skills to be detrimental in the long term. This pretty much only stops being the case when the thing you are short cutting becomes totally irrelevant. We have yet to see how the atrophying of programming skills will affect our collective ability to make reliable and novel software.
In my experience, I have not seen much new software that I'm happy about that is the fruit of LLMs. I have experienced web apps that I've been using for years getting buggier.
That's OK, but surely you can see how painters wouldn't enjoy that in the slightest.
The thing is that a large portion of what people are using AI (and tech in general) to do simply doesn't need to be done. We don't need a "smart" dental floss dispenser, or something that automatically buys toilet paper for you, or little Clippy-the-paper-clip bots popping up everywhere to ask if you need help. A lot of the tech that's coming out is a through-and-through waste of everyone's time and energy --- its users' as well as its makers'.
AI Coding has the same problem as "self driving cars".
Until the car can be completely trusted to drive itself and never need human intervention, the human has to stay in a weird state of not driving the car, but being completely alert and attentive and ready to resume control in an instant. This can be more tiring and stressful than just driving yourself.
Vibe coding is very similar. The AI can generate code at an astounding rate. But all of it has to be examined carefully for strange errors that a human would be very unlikely to make.
In both cases, it's very questionable whether there is significant savings in the time or attention of the human still in the loop vs just performing the activity completely by herself.
I fully and absolutely agree the future is bright. Soon we can outsource both the work and the ideas to LLMs. Make a fully automated system to produce complete novels, music, movies, videos, and software. Just prompt an AI to make a movie, book, music, or even a SaaS. No humans involved. An absolutely superior system. Just instruct the LLM to start producing programs and monetizing them. No ideas needed, no effort. No thought.
You can even source ideas from it. No need to think or have any personal input anymore.
Programming really is fascinating as a skill because it can bring so much joy to the practitioner on a day-to-day problem-solving level while also providing much value to companies that are using it to generate profit. How many other professions have this luxury?
As a result, though, I think AI taking over a lot of what we're able to do has the dual issue of making your day-to-day rough both as a personally enriching experience and as a money-making endeavor.
I've been reading The Machine That Changed the World recently and it talks about how Ford's mass production assembly line replaced craftsmen building cars by hand. It made me wonder if AI will end up replacing us programmers in a similar way. Craftsmen surely loved the act of building a vehicle, but once assembly lines came along, it no longer made sense to produce cars in that fashion since more unskilled labor could get the job done faster and cheaper. Will we get to a place where AI is "good enough" to replace most developers? You could always argue that craftspeople could generate better code, but I can see a future where that becomes a luxury and unnecessary if tools do most of the work well enough.
How people derive utility varies from person to person and I suspect is the root cause of most AI generation pipeline debates, creative and code-wise. There are two camps that are surprisingly mutually exclusive:
a) People who gain value from the process of creating content.
b) People who gain value from the end result itself.
I personally am more of a (b): I did my time learning how to create things with code, but when I create things such as open-source software that people depend on, my personal satisfaction from the process of developing is less relevant. Also, getting frustrated with code configuration and writing boilerplate code is not personally gratifying.
As much as I dislike not having a good mental model of all the code that does things for me, ultimately, I have to concede the battle to get things done. This is not that different from importing packages that someone else wrote, or relying on codebases of my colleagues.
That said, even if I have to temporarily give up on understanding, I don't believe there's any reason to completely surrender control. I'll call a technician when things need fixing right away, but that doesn't mean I shouldn't learn (some of) the fixes myself.
> is it about doing, or is it about getting things done?
It's both. When you climb a mountain, the joy is reaching the summit after the hard hike. The hike is hard but also enjoyable in itself, and makes you appreciate reaching the top even more.
If there's a cable car or a road leading to the summit, the view may still be nice, but I'll go hiking somewhere else.
This reminds me of the debate around Soylent when that came out. Are meals for enjoyment, flavour, and the experience or are they about consuming nutrients and providing energy?
I’d say that debate was largely philosophical, with proponents on both sides. And really, the answer might be that both things are true for different people at different times. Though I also observe that Soylent did not, by and large, end up replacing meals for the vast majority.
The correct analogy would be that half of the lights randomly wouldn't light up, and then you'd have to go out anyway - but in a hurry, and only to certain ones - just to discover that you need to go back 20 minutes later because there is another problem with the same light, and your boss would expect you to do everything much faster, and you'd end up even more frustrated.
I was reflecting on this yesterday, as I have often hated AI for generating emails and other written text, but am kinda loving it for writing code.
One realization was what you said about me just wanting the code done so I can use the app.
The second was that, for me, I care about the output of the code, not the code itself. Whereas with the written word, I care about the word. Perhaps if I used AI to summarize what someone wanted in the email then I would care less about the written word coming from a human, but right now I still want to read what they've written. You can say that there are programmers who want to read the code from someone else, but I don't think there's the equivalent of code abstracted away into a UI that exists for the written word (open to that being challenged).
The last and maybe biggest realization is that computer languages exist at multiple levels of abstraction: machine language, assembly language, high-level languages, etc. I'm not sure human languages have as many layers of abstraction, or if they do, they exist within the same language.
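As a tiny illustration of those stacked layers, here is the same addition seen at two of them - Python source and the bytecode underneath (a sketch using Python's dis module; the add function is just an example):

    # The same computation at two abstraction levels: high-level source,
    # and the bytecode instructions the CPython interpreter executes.
    import dis

    def add(a, b):
        return a + b

    dis.dis(add)  # prints lower-level ops, e.g. LOAD_FAST / BINARY_OP on CPython 3.11+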
I'll keep reflecting, just my short two cents for now.
> is it about doing, or is it about getting things done?
No, this is a false dichotomy, and dangerous slippery-slope thinking.
It’s about building a world where we can all live in and find meaning, joy, dignity, and fulfillment, which requires a balance between pursuing the ends and preserving the means as worthwhile human pursuits.
If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.
Human society and civilization is for the benefit of humans, not for filling checkboxes above all else.
I am reminded of Dijkstra's remark on Lisp, that it "has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts."
(I imagine that this is not limited to Lisp, though some languages may yield more or less results.)
If we consider programming entirely as a means to an end, with the end being all that matters, we may lose out on insights obtained while doing the work. Whether those insights are of practical value, or economic value, or of no value at all, is another question, but I feel there is more likely to be something gained by actually doing the programming, compared to actually lighting the street lamps.
(Of course, what you are programming matters too. Many were quick to turn to AI for "boilerplate"; I doubt many insights are found in such code.)
Daniel Pink's book "Drive" explains that true motivation comes from intrinsic factors: autonomy, mastery, and purpose. It’s not about external rewards or doing every task yourself, but about having the freedom to direct your work, the drive to improve your skills, and a meaningful purpose behind what you do. In programming, AI can free us from routine tasks, letting us focus on creative problem-solving and realizing our ideas - this aligns perfectly with what Pink calls the deeper, more fulfilling motivation to get things done in a way that matters. So, it’s less about losing fun and more about shifting to meaningful engagement and impact.
> is it about doing, or is it about getting things done?
For me it is getting things done while also understanding the whole building, from its foundation up. Only with such a comprehensive mental model can I predict how my code will behave in unanticipated situations. I've only ever achieved this mental model by doing.
Succinctly, "it is about doing" to guarantee I'm "getting things really done".
> my time goes into thinking and precisely defining what I want
I'm reminded of the famous quote "Programs must be written for people to read, and only incidentally for machines to execute." [1]
A programming language is exactly the medium that lets me precisely define my thoughts! I think the only way to achieve equivalent precision using human language is to write them in legalese, just as a lawyer does when poring over the words and punctuation in a legal contract (and that depends upon so much case law to make the words really precise).
> For me, AI allows me to realize my ideas, and get things done.
More power to you! Bringing our ideas to life is what we're all after.
[1] https://web.archive.org/web/20180427140749/https://mitpress....
I agree wholeheartedly that it's about getting things done - that's what the universe cares about. As individuals we enjoy being in flow, and when the nature of the work changes we may lose our flow and shake our fists in frustration...
Change can be painful, but that's because it takes energy.
From particles to atoms to cells to people to civilizations, it seems like the whole point is to get more stuff done. Why? Probably because getting stuff done is more interesting than the alternative.
I agree with your point that it's sometimes about getting things done, but your example is flawed. Your example about gas-powered street lights is arguing for technological evolution. But the people who say "AI has taken the fun out of programming" are fighting for craftsmanship and love.
Nobody ever found craftsmanship or pleasure in lighting up gas-powered street lights. But there are a lot of programmers who value "doing" programming because it's their craft or art form.
I have never had a programming job. But I program all day to serve my customers for the products I created. Because it's my art-form. I love "doing" it (my way!).
It will get done. I just want to be the person to do it.
None of it will be good, all of it will be bad. Mark my words.
Once again, no one is capable of coming up with a good analogy. The analogy here would be that someone comes up with occasionally exploding electric lights that sometimes create black holes that suck up all the surrounding light for a block, and then really work as intended under 60% of the time. But the city rushes to implement them as recklessly and quickly as possible, because promises and lies. Also, the whole time it's happening, they keep saying not a single gas-lighter will lose their job, because the black holes need to be fed human flesh sometimes... so we will get them to do that.
Forgive me some frivolousness and let me reduce this to its inevitable absurdum: Is life about living, or is it about getting life ended?
So far, the economy is not built on getting things done alone.
The promise of progress is that not having to do chores will make us happier; it's partly true, and partly false.
People hate doing too much of too-harmful things. Besides that, if you need me to redo your shelves, or help you get milk in the morning, I'm happy to oblige.
But back to the point of things getting done and the march of progress: we're entering a potential Kurzweil runaway, where computers understand and operate on the world faster, better, and longer than us, leaving us with nothing to do. So we'll see, but I'm not betting a lot on that; it's going to be toxic (the Big 4 becoming our main dependency, instability, and a potential depression frenzy).
Look at how often people say "I wanna do something that matters," "I wanna help others." It's a bit strange to say this, because we spend our lives maintaining the world to be comfortable, but having everything done for you all the time might not be heaven on earth (even in the ideal best case).
Sometimes I read something on the internet and I think: finally someone has articulated something the way that I think about it. And it is very validating. And it cuts through a bunch of noise about how "oh you should be tuning and tweaking this prompt and that" and really speaks to the human experience. Thanks for this.
Same. After using AI for too long I get the same mental feeling as I do when scrolling endlessly on YouTube: a listless, empty, purposeless feeling that I find difficult to break out of without a whole night's rest.
Others see it as mostly a slot machine that, more often than not, gives you almost-right answers. Knowing the psychology of gambling-machine design is maybe a big barrier between these people.
> After about 3-4k lines of code I completely lost track of what is going on... Overall I would say it was a horrible experience, even though it took 10 hours to write close to 10000 lines of code
It's hard to take very much away from somebody else's experiences in this area. Because if you've been doing a substantial amount of AI coding this year, you know that the experience is highly dependent on your approach.
How do you structure your prompts? How much planning do you do? How do you do that planning? How much review do you do, and how do you do it? Just how hands-on or hands-off are you? What's in your AGENTS.md or equivalent? What other context do you include, when, why, and how? What's your approach to testing, if any? Do you break down big projects into smaller chunks, and if so, how? How fast vs slow are you going, i.e. how many lines of code are you letting the AI write in any given time period? Etc.
The answers to these questions vary extremely wildly from person to person.
But I suspect a ton of developers who are having terrible experiences with AI coding are quite new to it, have minimal systems in place, and are trying "vibe coding" in the original sense of the phrase, which is to rapidly prompt the LLM with minimal guidance and blindly trust its code. In which case, yeah, that's not going to give you great results.
I think the important conclusion to draw from this is that publicly available code is no longer created or even curated by humans, and it will be fed back into data sets for training.
It's not clear what the consequences are. Maybe not much, but there's not that much actual emergent intelligence in LLMs, so without culling by running the code, there seems to be a risk that the end result is a world full of even more nonsense than today.
This already happened a couple of years ago for research on word frequency in published texts. I think the consensus is that there's no point in collecting anymore since all available material is tainted by machine generated content and doesn't reflect human communication.
I have felt similar thoughts. You start off with a mental model of how to develop an app, based on experience. You can quickly get the pieces working and wire them up.
What gets lost: when you develop an app normally, over days, you create a mental model as you go along that you take with you throughout the day. In the shower you may connect some dots and reimagine the pieces in a more compelling way. When the project is done you have a mental model of all of the different pieces: thoughts of where to expand, and fears of where you know the project will bottleneck, with a mental note to circle back when you can.
When you vibe code you don't get the same highs and lows. You don't mentally map each piece. It's not a surprise that opening up and reading the code is the most painful thing, but reading my own code is always a joy.
Pretty much my experience, LLMs have taken the fun out of programming for me. My coding sessions are:
1. write prompt
2. slack a few minutes
3. go to 1
4. send code for review
I know what the code is doing, how I want it to look eventually, and my commits are small and self-contained, but I don't understand my code as much because I didn't spend so much time manipulating it. Often I spend more time in my loops than if I were writing the code myself.
I'm sure that with the right discipline, it's possible to tame the LLM, but I've not been able to reach that stage yet.
In code, one way I’ve found to ground the model and make its output trustworthy is test-driven development.
Make it write the tests first. Make it watch the tests fail. Make it assert to itself that they fail for the RIGHT reason. Make it write the code. Make it watch the tests pass. Learn how to provide it these instructions and then take yourself out of the loop.
When you’re done you’ve created an artefact of documentation at a microscopic level of how the code should behave, which forms a reference for yourself and future agents for the life of the codebase.
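To make that loop concrete, here is a minimal sketch of it as a script (run_tests, the tests/ path, and the pytest invocation are illustrative assumptions, not anything the workflow above prescribes):

    # Sketch of the red -> verify-why -> green loop described above.
    import subprocess

    def run_tests():
        # Run the suite, capturing output so we can inspect WHY it failed.
        return subprocess.run(
            ["python", "-m", "pytest", "tests/", "-x", "-q"],
            capture_output=True, text=True,
        )

    # 1. With tests written but no implementation, the suite must fail...
    red = run_tests()
    assert red.returncode != 0, "tests should fail before the code exists"

    # 2. ...and fail for the RIGHT reason: an unmet behavioral expectation
    # (an AssertionError), not a typo, missing import, or collection error.
    assert "AssertionError" in red.stdout, red.stdout

    # 3. Only after the implementation is written must the suite pass.
    green = run_tests()
    assert green.returncode == 0, green.stdout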
Reading through to the end of the README.md on the GitHub page, I noticed that he's claiming copyright on the code, even though he admits that 3/4 of it is machine generated, and he doesn't understand it all.
It reminded me of the legal challenges for copyright of content that was not created by a human. In every case that I'm aware of so far, courts have ruled that content that wasn't created by a person cannot be copyrighted.
A key -- perhaps THE key -- remark here, IMO is the following:
> I do want to make things, and many times I don't want to know something, but I want to use it
This confesses the desire to make, to use, and to make use of, without ANY substantive understanding.
Of course this seems attractive for some reasons, but it is a wrong, degenerative way to be in the world. Thinking and being belong together. Knowing and using are two dimensions of the same activity.
The way of these tools is a making without understanding, a using without learning, a way of being that is thoughtless.
There's nothing preventing us from thoughtful, rigorous, enriching use of generative ML, except that the systems we live and work in don't want us to be thoughtful and enriched and rigorous. They want us pliant and reactive and automated and sloppy.
We don't have to bend to their wants, though.
Given the code has been completely vibe-coded, what does this mean in practice?:
> Copyright (c) 2025
Whose copyright? IIRC, it is consensus that AI cannot create copyrightable works. If the author does not own the copyright, can they add a legally binding license? If not, does this have any legal meaning?:
> IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY
> After about 3-4k lines of code I completely lost track of what is going on, and I wouldn't consider this code that I have written, but adding more and more tests felt "nice", or at least reassuring.
> There was some gaslighting, particularly when it misunderstood dap_read_mem32, thinking it was reading from RAM rather than via the MEM-AP TAR/DRW/RDBUFF protocol, which led to an incredible amount of nonsense.
> Overall I would say it was a horrible experience, even though it took 10 hours to write close to 10000 lines of code, I don't consider this my project, and I have no sense of accomplishment or growth.
Ah yes, we can now mass produce faulty code, we feel even more alienated from our work, the sense of achievement gets taken away, no ownership, barely any skill growth. Wonderful technology. What a time to bring value to the shareholders!
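For anyone wondering what that particular misunderstanding was about: under ARM's ADIv5 debug interface, a "memory read" is an indirect, multi-step transaction through debug registers, not a direct RAM access. A toy simulation of the idea (the register layout and helper names here are mine, not the author's code):

    # Toy model: write the address to the MEM-AP's TAR register, issue a
    # posted read via DRW, then collect the result from the DP's RDBUFF.
    RAM = {0x20000000: 0xDEADBEEF}  # pretend target memory
    ap = {"TAR": 0}                 # MEM-AP state
    dp = {"RDBUFF": None}           # debug-port state

    def ap_write_tar(addr):
        ap["TAR"] = addr            # 1. set the transfer address

    def ap_read_drw():
        # 2. the bus access happens now; the result is posted, not returned
        dp["RDBUFF"] = RAM[ap["TAR"]]

    def dap_read_mem32(addr):
        ap_write_tar(addr)
        ap_read_drw()
        return dp["RDBUFF"]         # 3. fetch the posted result

    assert dap_read_mem32(0x20000000) == 0xDEADBEEF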
Whilst I ostensibly agree with the sentiment of the linked page, my personal experience is very different - my suspicion is that this is due to the different technologies at play.
I enjoy building little SaaS side hustles that one day (I can dream) might make me a couple of grand, but I don’t enjoy writing 20+ CRUD controllers, with matching validation, and HTML forms. I’m probably a bit neurospicy, and I have a young family, but before LLMs came along I might “finish” one SaaS every couple of years. I’ve been able to complete 3 so far this year. It’s a wild uptick in productivity.
I’m well aware of the dangers that come with it too, but having been in the mines churning out this code for the last couple of decades I feel well versed in what to prompt for, just as I would with a keen yet naive junior engineer. I’d also argue that LLMs are much better at enforcing a particular style on the code base.
I feel strongly that with an opinionated framework, in a relatively simple language, solving repetitive simple problems - you’ll have a great time with LLMs and you’ll be more productive than ever.
The problems arise when we delegate jobs like writing READMEs or tests (the boring stuff, right?) without really getting into the weeds.
Vibe coding is the cursed gold from the first 'Pirates of the Caribbean' movie.
> "For too long I've been parched of thirst and unable to quench it. Too long I've been starving to death and haven't died. I feel nothing. Not the wind on my face nor the spray of the sea. Nor the warmth of a woman's flesh."
[steps into moonlight becoming a skeleton]
That was a bit overdramatic, I think. But it does mesh with my experience, though as a robot of course I say this with a lot less emotion:
Use LLMs for "compressing and understanding large amounts of existing code", autocomplete, and "vibe coding prototypes, especially for non-programmers". Do not use LLMs for "vibe coding production projects".
Vibe coding sucks at this moment in time. On the other hand, when was the last time you looked at assembler code and thought, mmmm, I do not like the style? There is also room for optimization: if I had written this myself, it would be way faster.
History repeats itself.
This reflects my XP as well: use LLMs for semantic search. Do not trust it with your code.
> Overall I would say it was a horrible experience, even though it took 10 hours to write close to 10000 lines of code, I don't consider this my project, and I have no sense of accomplishment or growth.
> In contrast, using AI to read all the docs (which are thousands of pages) and write helpful scripts to decode the oscilloscope data, create packed C structs from the docs, etc., was very nice, and I did feel good after.
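That second mode rings true for me as well: small, checkable helpers. The kind of packed-struct decoder meant there might look roughly like this (the record layout is invented for illustration, not taken from any oscilloscope docs):

    # Decode a fabricated record: a header of f64 trigger time, u16 channel,
    # u32 sample count, followed by that many float32 samples (little-endian).
    import struct

    HEADER = struct.Struct("<dHI")

    def decode_record(buf):
        t, chan, n = HEADER.unpack_from(buf, 0)
        samples = struct.unpack_from("<%df" % n, buf, HEADER.size)
        return t, chan, samples

    # Round-trip check with made-up data:
    blob = HEADER.pack(1.5, 2, 3) + struct.pack("<3f", 0.1, 0.2, 0.3)
    print(decode_record(blob))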
No.
Vibe coding, in the sense of handing all responsibility and accountability for the code in a change request over to AI and then claiming the bad code is the fault of the AI, is not a thing. It's still your change request regardless of how you created it. If you wrote every line, it's yours. If you copied it from SO into your editor and committed it, that was your choice, and therefore it's your code. If you prompted an LLM to write something, you are responsible for that.
If there is AI slop in your codebase it is only because you put it there.
Many years ago (early 2000s), I had to write a tool to scrape Yahoo message boards. The business was that folks were running "pump and dump" scams on the finance boards. The companies whose stock was being "pumped" hired law firms who, in turn, hired the company I worked for.
I was VERY new to Perl and didn't realize that LWP::Simple already existed. I therefore ended up writing my own library using TCP socket handling and sending GET requests "by hand".
It was a great learning experience and taught me a lot about how message boards, TCP and HTTP work. At the same time, it was slow, took a lot of time and had limited features and very little error handling.
I now use Python's requests module all the time and have never, not ever, thought "I should go peek inside the library to see how it actually works under the hood".
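The gap between those two worlds is easy to show (a sketch; example.com stands in for the real boards, and the second half assumes requests is installed):

    # The "by hand" version, roughly what that early Perl script was doing:
    import socket

    def get_by_hand(host, path="/"):
        request = "GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % (path, host)
        with socket.create_connection((host, 80)) as sock:
            sock.sendall(request.encode())
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)

    print(get_by_hand("example.com")[:80])

    # Versus the library version - one line, protocol details fully hidden:
    import requests
    print(requests.get("http://example.com/").text[:80])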
My point in this story is that LLMs will probably move us more and more towards "AI as library". Sure, if you are writing super-high-performance code that ties tightly to hardware, you might still dig down into the details.
Most of us will probably just use the next generation "library".
> I don't consider this my project, and I have no sense of accomplishment or growth.
Trigger warning incoming... if you are in a for-profit company, does the business really care whether you feel accomplished as long as you are producing code? As an analog - the assembly line worker on a highly automated Tesla assembly line is essentially a replaceable commodity at this point.
> The main issue is taste: when I write code I feel if it's good or bad as I am writing it; I know if it's wrong. But using Claude Code I get desensitized very quickly and I just can't tell; it "reads" OK, but I don't know how it feels. In this case it happened when the code grew about 4x, from 1k to 4k lines. And worst of all, my mental model of the code is completely gone, and with it my ownership.
Does the code work? If so, why does any of this matter?
In an age of automated manufacturing, I've noticed more and more independent wood workers. This is okay - but you aren't going to supply the world's furniture needs with thousands or hundreds of thousands of artisan wood workers.