> Regardless, the lesson for people like myself is that, in order to feel happy with creating, we have to actually create. An artist would not call their work art if they had little to no role in creating it.
Thanks. The author touched on something there, close to a truth (or a deeply held belief of mine?) about our lives: something about the journey mattering more than the destination...
I think the author's definition of "creating" is just too narrow. A gardener can get tremendous satisfaction from watching their plants grow from the bed of soil that they prepared, even if there is not as much weeding or watering to do later on in the growth cycle. A parent can get tremendous satisfaction from watching their child continue to grow and develop, even after the child is no longer an infant who requires constant care and attention.
In my opinion, having spent about a year and a half working on various coding projects using AI, there are phases to the AI coding lifecycle.
1) Coding projects start out like infants: you need to write a lot of code by hand at first to set the right template and patterns you want the AI to follow going forward.
2) Coding projects continue to develop kind of like garden beds: you have to guide the structure and provide the right "nutrients" for the project, so that the AI can continue to add additional features based on what you have supplied to it.
3) Coding projects mature kind of like children growing up to become adults. A well configured AI agent, starting from a clean, structured code repo, might be mostly autonomous, but just like your adult kid might still need to phone home to Mom and Dad to ask for advice or help, you as the "parent" of the project are still going to be involved when the AI gets stuck and needs help.
Personally, while I can get some joy and satisfaction from manually typing lines of code, most of those lines of code are things I've typed literally hundreds of times over my decades-long journey as a developer. There isn't as much joy in typing out the same things again and again, but there is joy in the longer-term steering and shaping of a project so that it stays sane, clean, and scalable. I get a similar sense of joy out of gently steering AI towards success in my projects that I get from gently steering my own child towards success. There is something incredible about providing the right environment and the right pushes in the right direction, and then seeing something grow and develop mostly on its own (but with your support backing it up).
Yeah, that captures what I've been feeling: our work is changing from that of craftsmen to that of managers.
Engineering used to be my go-to way to enter a flow state. Now, I spend a few minutes thinking about what I want and then a lot of time babysitting Claude Code -- similar to the experiences described here.
Has anyone found a way to make the "manager" part feel as engaging and creative as the "craftsman" part used to?
Good. Enjoy that journey... on your own time. You've been missing your productivity OKRs, and your Claude logs say you haven't been using the tools the company has provided. You're on a PIP: if measurable progress is not seen in 30 days, disciplinary action up to and including termination may be taken.
> I wonder if some “actual” artists (as in, those people who create the kind of art most people would recognize) have gone through a similar arc of realizing the emptiness of creating with AI tools.
My impression is that artists are even more hostile than the most AI-skeptic of software engineers. In large part, this is likely because the economic argument doesn't hold much sway: for the large majority of artists, it's hard to make money with art as it is; the bottleneck is not the volume of art they can produce. There's a much clearer path from "more code" to "more money", even if it's still not direct.
Perhaps that's why I as a software developer am fully genAI-skeptic…I've always considered myself a multidisciplinary artist and the skill I have in writing code is simply one of the many possible avenues I use to express myself. (Alas, it's the one which produces the most income by far, but that's another conversation!)
I haven't had nearly the same experience of success with AI.
I'm often accused of letting my skepticism hold me back from really trying it properly, and maybe that's true. I certainly could not imagine going months without writing any code, letting the AI just generate it while I prompt.
My work is pushing these tools hard, and it is taking a huge toll on me. I'm constantly hearing how life-changing this is, but I cannot replicate it no matter what I do.
I'm either just not "getting it", or I'm too much of a control freak, or everyone else is just better than I am, or something. It's been miserable. I feel like either I'm extremely unskilled or everyone else is gaslighting me, with basically nowhere in between.
I have not once had an LLM generate code that I could accept. Not one time! Every single time I try to use the LLM to speed me up, I get code I have to heavily modify to correct. Sometimes it won't even run!
The advice is to iterate, but that makes no sense to me! I would easily spend more time iterating with the LLM than just writing the code myself!
It's been extremely demoralizing. I've never been unhappier in my career. I don't know what to do; I feel I'm falling behind and being singled out.
I probably need to change employers to get away from AI usage metrics at this point, but it feels like it's everyone everywhere guzzling the AI hype. It feels hopeless.
You're being gaslit. The point is to make you look unproductive.
The untrained temp workers using AI to do the entirety of their jobs aren't producing code of professional quality; it doesn't adhere to best practices or security unless you monitor that shit like a hawk. But if you're still engineering for quality, then AI is not the first train you've missed.
They will get code into production quicker and cheaper than you through brute force iteration. Nothing else matters. Best practices went the way of the rest of the social contract the instant feigned competence became cheaper.
Even my podunk employer has AI metrics. You won't escape it. AI will eventually gatekeep all expertise and the future employee becomes just a disposable meat interface (technician) running around doing whatever SHODAN tells them to.
My "agentic" experience is mostly Aider, working across a Golang webapp codebase. I've mostly used Gemini (whatever model Aider chooses to use at the moment).
Most of my experience has been similar to yours. But yesterday, out of the blue, it spat out a commit that I accepted almost verbatim (I just added some line breaks and such). I was actually really surprised: not only did it follow the existing codebase's conventions and variable-naming style, it also introduced a couple of patterns that I hadn't thought of (and liked).
But it also charged me $2 for the privilege :)
(On a related note, Gemini API has become noticeably more expensive compared to, say, a month ago.)
I find that with Aider managing context (what files you add to it) can make all the difference.
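To make the context point concrete, here is roughly how I keep an Aider session scoped. The flags and chat commands below are from memory of Aider's interface, and the file names and model string are made up for illustration, so treat this as a sketch and check Aider's docs:

```shell
# Start aider with only the files relevant to the change,
# so the model isn't drowning in the whole repo.
aider --model gemini/gemini-1.5-pro handlers/user.go models/user.go

# Then, inside the chat:
#   /add templates/user.html   pull a file in only when it's needed
#   /drop models/user.go       remove it again once that edit is done
#   /tokens                    check how much context you're spending
```

Aider also builds a repo map automatically, so files you haven't /add-ed are still visible to the model in outline form; the point is just to keep the editable set small.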
AI coding tools aren't equally effective across all software domains or languages. They're going to be at their "best" (relative to their own ability distribution) in the "fat middle" of software engineering, where they have the most training data: popular tasks in popular languages and popular libraries (web dev in React, for example). You're probably out of luck if your task is writing netcode for a game engine, for instance.
But isn't prompting and iterating another way of instructing the computer to do what you want? Perhaps we could view it as a step up in the level of abstraction we work at.
We had similar arguments when high-level languages were introduced. Experienced programmers of that era maintained that they could write better programs in assembly language than in COBOL/FORTRAN/PL-I/Pascal etc. Yet even today we still need some core portions of code written in assembler, just not much.
I have a working theory that it's mostly bad programmers who are achieving massive productivity gains. Really good programmers will probably have trouble getting LLM tools to match their normal level of output.
And the irony is that those of us using AI to amplify our output and produce at exponential speeds feel like your comments are gaslighting us instead! I've never seen such an outright divide among practitioners of a technology in terms of perception and outcomes. I got into LLMs super early, using them daily since 2022, so that may have bolstered the way I've augmented my approaches and tooling. Now almost everything I build uses AI at runtime to generate better tools for my AI to generate tools at runtime.
> As I have kept up conversation with my developer friends, it has become essentially the norm, and everyone is being pressed to find greater productivity using AI coding tools.
What a weird alternate universe it is that I live in. My managers are somewhat skeptical of AI workflows and keep throwing up roadblocks to deeper and more coordinated use among my colleagues. Probably because there is so much churn, and it’s difficult to replicate the practice from one engineer to another. Some of my colleagues are very resistant to using AI. I use it quite extensively, but rate limits mean that there are occasions when I must pick up where the machine leaves off.
I agree. Using AI for development is addictive, once you start you cannot stop. It has harmed me, stunting my skills; just like the lack of exercise leads to physical decline, my coding muscles atrophied due to the lack of practice this dependency caused. And since tech is my only real strength, and I'm not what you'd call "successful", feeling stuck at the one thing I'm supposed to be good at makes me feel useless, makes me feel like a burden. And the worst thing? I'm unable to get back on track. I've tried getting back to coding manually, but I simply can't do it anymore. Last time I programmed anything without using AI was in 2023, on a Scratch project, simply because AI could not write Scratch blocks.
But at least I can now ship (shitty) code a lot faster.
Letting AI generate code instead of writing it yourself as a software developer is basically the same as a painter generating a painting instead of creating the art himself.
toomuchtodo|7 months ago
https://youtu.be/u6XAPnuFjJc, referenced often here: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
fithisux|7 months ago
https://medium.com/@kenichisasagawa/the-reason-behind-develo...
GianFabien|7 months ago
The industrial scale painting robots work well for painting cars coming off an assembly line, but not for landscapes nor portraits.
Automation (not just AI, but in general) works well for highly structured, repetitious work but not for creative expression.
20after4|7 months ago
This could be cope but I don't think it is.
randomNumber7|7 months ago
But to get there it might be a good move to code for yourself (and read books).
Then on the other hand coding will not be a fun job anymore...
senko|7 months ago
You mean have a handheld device with a button they can press, which would instantly generate a painting of what it was pointed at?
Agreed, that could never be art.