I've had the absolute opposite experience: AI has brought back a lot of the joy of programming and building products for me.
I've been using Cursor extensively these past few months, for anything ranging from scaffolding to complex UIs. The trick, I've found, is to treat the AI like I would a junior engineer: giving it concrete, detailed tasks to accomplish and breaking the problem down myself into manageable chunks. Here are two examples of little word games I've made; each took, all in all, a couple of days to ideate, design, and build.
https://7x7.game
You're given a grid and need to make as many words as possible, using only the letters in the bottom row. There's complex state management, undo, persistent stats, light/dark modes, and animations. About 80-90% of the code was generated and then manually tweaked/refactored.
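The undo feature mentioned above can be sketched as a snapshot stack. This is a hypothetical Python illustration of that pattern (the actual game is a web app, and every name here is invented), not the author's code:

```python
import copy


class GameState:
    """Minimal stand-in for the grid game's state: played words and score."""
    def __init__(self):
        self.words = []
        self.score = 0


class UndoableGame:
    """Snapshot-based undo: push a deep copy of the state before each move."""
    def __init__(self):
        self.state = GameState()
        self._history = []

    def play_word(self, word):
        # Save the pre-move state so this move can be reverted later.
        self._history.append(copy.deepcopy(self.state))
        self.state.words.append(word)
        self.state.score += len(word)

    def undo(self):
        # Restore the most recent snapshot, if any moves were made.
        if self._history:
            self.state = self._history.pop()


game = UndoableGame()
game.play_word("cat")
game.play_word("tree")
game.undo()
print(game.state.words, game.state.score)  # ['cat'] 3
```

Persisting stats would then just be a matter of serializing the current state object, which is one reason snapshot-style state management shows up in games like this.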
https://vwls.game
Given 4 consonants, you have to generate as many words as possible. This is heavily inspired by Spelling Bee, but with a slightly different game mechanic. One of the challenges was that not all "valid" words are fun; there are a lot of obscure/technical/obsolete words in the dictionary, so I used Claude's batch API to filter the dictionary down to words that are commonly known. I then used Cursor to generate the code for the UI, with some manual refactoring.
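The dictionary-filtering step could look roughly like this. A minimal Python sketch that assumes the batch API has already returned a "common"/"obscure" verdict for each candidate word — the verdicts, words, and helper function below are all invented for illustration, not Claude's actual API:

```python
# Hypothetical per-word verdicts, as if collected from an LLM batch job.
verdicts = {
    "house": "common",
    "qanat": "obscure",  # an ancient irrigation channel
    "plane": "common",
    "zarf":  "obscure",  # a holder for a coffee cup
}


def filter_dictionary(words, verdicts, keep="common"):
    """Keep only the words the model judged as commonly known."""
    return [w for w in words if verdicts.get(w) == keep]


playable = filter_dictionary(list(verdicts), verdicts)
print(playable)  # ['house', 'plane']
```

The point of batching is that the expensive model calls happen once, offline; the game itself only ever ships and checks against the filtered word list.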
In both cases, having the AI generate the code enabled me to focus on designing the games, both visually and from an interaction perspective. I also chose to manually code some parts myself, because these were fun.
At the end of the day, tools are tools: you can use them however you like; you just need to figure out how they fit into your workflow.
That's because you don't find joy in programming; you find joy in game design. Actually, game design can be automated too; the interfaces just aren't as advanced as Cursor's. We'll get there soon.
Exactly. AI lets me focus on the most interesting part of programming: coming up with how I want to solve the problem, rather than wasting time searching docs to find out whether a particular function will do what I want, along with many other tasks I didn't even realize weren't my favourite before.
At least a junior engineer can learn and grow from your feedback, becoming a more useful member of the team; the generative model, on the other hand, will not.
> In both cases, having the AI generate the code enabled me to focus on designing the games, both visually and from an interaction perspective. I also chose to manually code some parts myself, because these were fun.
I don't agree. Copilot etc. is kind of worthless for me; it creates so many issues that I've never bothered to work with it.
AI is awesome for solving issues, asking it questions about code, asking for possible solutions. But maybe I'm just fast at writing code that actually solves the problem, so I don't need an AI to code for me.
Same. I spend way longer debugging and editing AI code than it would have taken me to just write it myself. I don't consider myself a fast typist; for me it's purely due to the inaccuracy of the results.
Cody's autocomplete used to work really well for me. Then they switched to DeepSeek. Now I regularly get suggestions that are irrelevant, incomplete, and contain syntax errors.
I'm not sure what it's like these days but I had a similar experience with Copilot a while back.
I wonder if good autocomplete is just too expensive.
If I were a trained professional software engineer who found joy in writing tests and TDD, maybe I'd feel differently, but I write software to help with basic scientific analysis, and ChatGPT has been an absolute game changer for writing tests.
I personally find writing tests to be soul-crushing, boring work. I never really learned it properly, and when I have a well-documented function, CGPT typically does a decent job making a rough draft. I often have to work on the test function and fix some things, but the final product is way better than the PoS I would have put together: my guess is it has saved me hundreds of hours. I have developed a decent understanding of fixtures, mocking, sharing fixtures across modules, etc., all with the help of ChatGPT. It "understands" my project and how it is organized, and makes suggestions based on this understanding. Yes, it sometimes gets stuck in local minima and I have to kick it out, which can be frustrating. But even that is a learning process, as I often go to SO or other people's code bases to find good examples, and feed them to ChatGPT to get it unstuck.
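The mocking workflow described above can be sketched in a few lines with Python's `unittest.mock`. This is a hedged illustration; the weather client and `classify` function are invented stand-ins for whatever external dependency the real tests stub out:

```python
from unittest.mock import MagicMock


class WeatherClient:
    """Imagined API client: normally performs a network call."""
    def temperature(self, city):
        raise NotImplementedError("network call, stubbed out in tests")


def classify(client, city):
    """Code under test: label a city hot or cold from the API result."""
    return "hot" if client.temperature(city) > 25 else "cold"


# In tests, replace the real client with a mock that returns canned data,
# so the test is fast, deterministic, and needs no network.
mock_client = MagicMock()
mock_client.temperature.return_value = 30
assert classify(mock_client, "Cairo") == "hot"

mock_client.temperature.return_value = 5
assert classify(mock_client, "Oslo") == "cold"
mock_client.temperature.assert_called_with("Oslo")
print("mocked tests passed")
```

In a pytest project, the mock client would typically be provided by a fixture in `conftest.py` so every test module can share it, which is the "sharing fixtures across modules" part.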
It's like the ultimate rubber duck paired programming partner. I tell it what I'm working on, and that's intrinsically helpful. But the rubber duck has really good feedback, because it has read the entire internet.
It's made writing tests for my code fun, for the first time ever.
The people I know personally who refuse to use CGPT are typically very good software developers, somewhat arrogant, with a chip on their shoulder; honestly, I think in 20 years we'll look back at them like the people who thought the internet was a passing phase in the mid-1990s. I also think many of them don't understand how LLMs work, or how powerful they can be when prompted correctly.
Tests are usually soul-crushing and boring for the same reason PHP was in 1999: a complete lack of structure, tooling, and separation of concerns made it tedious and difficult.
I find it interesting that when people describe to me how they use LLMs to write code it's either short throwaway scripts or to write the kind of code that would make me retch (e.g. tests stuffed full of horrible mocks, spaghetti boilerplate).
The difference with the internet is that it was not the people on the inside calling it useless (engineers). It was the business people and others who had no understanding of the technology and possibilities.
In this case it is the opposite, the best ML/software engineers today think this is a passing phase. It's the general population and business people who are claiming it to be revolutionary.
For me, it’s quite the opposite—it brought back the joy of programming.
There are thousands of weather apps in the App Store, but none display rain data exactly the way I’d like to see it. That’s why I’ve long considered writing my own home screen widget to show it exactly as I want.
I hadn’t developed iPhone apps in a few years, so I had no experience with SwiftUI, the Swift Graph framework, or creating widgets. Just two years ago, building an app with a widget from scratch would have taken me a week — to read tutorials, navigate the necessary documentation, get started and solve my beginner bugs. Because of that time investment, I always hesitated to even begin.
Now, I’ve created exactly what I wanted in a single afternoon after work, with the help of AI. To be honest, GitHub Copilot isn’t very helpful for this, though it does speed up repetitive typing. However, using ChatGPT to scaffold the graph code—with me tweaking the parameters—made the process much faster. Since they added search functionality, there’s minimal "hallucination" of APIs, allowing for quick iterations and bringing back that “joy of programming” feeling.
I assumed from the actual title that there wouldn't be any content worth reading and stopped there. It's funny to call the false HN title "clickbait" but I did click that one and wouldn't have clicked the other.
When you program for a living, you want the fastest path to creating the best code conforming to your metric of "best." Copilot may or may not be able to get you there, YMMV as they say.
When you program for a hobby, you oftentimes seek to enjoy the route as much or more than reaching the destination. Copilot would be a distraction and an annoyance in this case - unless you're genuinely stuck and then you can use Copilot as a mentor.
It all depends on your context and what you're trying to do.
I've taken the simple solution: if I want to enjoy programming for programming's sake, I turn copilot off. If I want to be careful and understand the problem and its solution in detail, I turn copilot off. If I simply want to get a toy project done and don't care at all about the implementation process, I might leave it on.
I've had an absolutely magical experience with Copilot, though. I honestly find it a bit strange when others say it has just been bad for them.
Copilot does very well when I'm solving the same problem I've solved in the past with different parameters. Parse a CAN network packet (8 bytes, so they do weird things like 6-bit counters with 2 bits in byte 3 and the rest in byte 4): Copilot can write that and the tests quickly; we have hundreds of different CAN packets we parse, so there is a lot of example code to look at. Everything is just different enough to look like boilerplate while not actually being boilerplate. However, when I'm trying to write code that isn't a variation of something I've done many times before, Copilot is not helpful. It can't complete as much, and what it does complete is often wrong for style reasons (it would be nice if the function it wants to call existed, but it doesn't, or it takes some other parameter that Copilot doesn't know about).
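The kind of split bit field described above looks roughly like this when parsed by hand. A hypothetical Python sketch; the exact layout here is invented, since real CAN packet definitions vary:

```python
def parse_counter(frame: bytes) -> int:
    """Extract a 6-bit counter split across two bytes of an 8-byte CAN frame.

    Assumed (invented) layout: the low 2 bits of the counter sit in the
    top 2 bits of byte 3, and the high 4 bits sit in the low 4 bits of
    byte 4.
    """
    assert len(frame) == 8, "classic CAN data field is 8 bytes"
    low = (frame[3] >> 6) & 0b11   # 2 bits from the top of byte 3
    high = frame[4] & 0b1111       # 4 bits from the bottom of byte 4
    return (high << 2) | low


# Counter value 0b100110 (38): high nibble 0b1001, low 2 bits 0b10.
frame = bytes([0, 0, 0, 0b10 << 6, 0b1001, 0, 0, 0])
print(parse_counter(frame))  # 38
```

With hundreds of packet definitions differing only in offsets and widths, it's easy to see why this reads like boilerplate to an autocomplete model while still not being copy-pasteable.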
This is my big fear about pervasive use of AI. I'm afraid that companies, policy makers, regulators, etc. will all start letting AI make important decisions without any human understanding of the reasoning behind them, with human puppets hiding behind the AI and bearing no accountability.
I imagine scenarios where AI could be given complete authority to decide who is hired/fired, who gets medical care, who gets food, who gets utilities (water/electricity/natural gas) to their homes, who gets disaster relief, etc. Quite frightening when you think about it. If AI decided to cancel you (and it had this level of authority) your very existence would be in danger.
Online comments are about to die. First, we won't be able to tell who is real anymore. People will profit by running huge botnets for advertising and political manipulation. (Inb4 someone says this already happens: AI will keep lowering the bar to entry and to evading detection.) But a knock-on effect is that there will be a market for farming fake accounts for later sale to said manipulators.
That’s just because people have been so indoctrinated by capitalism that they only find joy if they’re being worked to the bone. They have forgotten the joy of not doing anything productive.
Sounds about right. Let's throw more money, power, and natural resources at it and see if the scale tips the other way! If not, well, nothing of value will have been lost, right? What's shortening humanity's lifetime in comparison to the potential productivity gains!
There's something very "late stage capitalism" about pouring torrents of capital into tools that can replace human ingenuity, artistry and creativity while starving stuff like infrastructure, space travel, etc.
When we're building something, we don't have all the specs upfront (unless it's simple). I'm learning and adapting as I write more of the project, and at some points I may backtrack or start from scratch. For projects where you have the whole code upfront, I guess you could pass that to an LLM (maybe).
The way I found most success using LLMs is as a partner to ping-pong ideas, to come up with code design, algorithms, and data structures that would fit a particular scenario. Then I'm ignoring its code and writing it to fit the project. The trick is to use the randomness combined with the vast array of information it holds to your advantage - like a supercharged Google.
Regarding my joy of programming, for me it's not even close. I get my joy from the project as a whole, not from snippets of code sprinkled around (sometimes I wish it could - I have hundreds of projects I would like to tackle but they're not worth my time). The only thing I worry about is that the next version would not be accessible to the public or they would cost exorbitant amounts.
edit: for the way I'm using LLMs, I found the approach taken by the Zed editor to be the best; I really recommend its buffer: easy to copy-paste, modify, and search (it would be nice to also have divergence from a chat, hopefully in the future)
This is one of the reasons why I don't use genAI for programming purposes. It increases the need to review and correct code I didn't write, which increases the amount of work that I don't enjoy doing.
My experience is that like so much else there's an expiry date on the joyful coding.
I gave it another chance with AI, but AI is too incompetent; it's more of a creative intern that speed-reads reports badly than a competent replacement for painstakingly reading documentation and googling.
Interesting example. As programming languages and tooling such as static analysis become more and more advanced, I would think memory leaks and mismanagement of memory will become a thing of the past. So I would argue that, one way or another, this was bound to happen.
Damn... I wish I could say this wasn't true. I've been trying to lie and say it hasn't, but it absolutely has made programming less enjoyable, by far. I've been trying to convince myself otherwise, but I am just lying to myself.
Is this tongue-in-cheek? It seems like it is, but I can't tell for sure. Disliking LLMs for coding because they're too helpful is an amusing concept either way.
nunez|1 year ago
…so not programming?
lackoftactics|1 year ago
Other than that, having chat with o1 and Sonnet inside the editor is pretty good, ngl.
miningape|1 year ago
Only time will tell, though.
tylerchilds|1 year ago
The start: the AI is a junior engineer you, as a senior engineer, can coach.
The end: the AI is a senior engineer with a half-finished problem you can polish as a junior engineer.