I recently needed to create a Dockerfile, which I have done many times before. Instead of opening the documentation, I used ChatGPT.

It took 3 times longer than if I had done it by hand, because I just kept copy-pasting each error back to ChatGPT, ending up in a dead end and having to start over from scratch. As soon as you have a requirement that is not in the 90% of cases, you can quickly end up in a dead end, especially if you have more than one such requirement.

Once the LLM starts hallucinating, you can easily get backed into a corner where you will need to reset and start from scratch.
> As soon as you have a requirement that is not the 90% of cases you can quickly end up in a dead end
You framed that as a criticism of ChatGPT, but I think that is exactly the point the OP is making. IMO if this is something you've done before, you could get a nice starting point with a single prompt and go from there; that would take a third of the time compared to reading the documentation from the get-go.
I haven't written any C++ in ages, so I used a local Llama 3.2 (via Ollama) to write a simple class and struct. The AI wanted me to use the `friend` keyword on class functions that weren't being accessed from anywhere else... It did correct itself after I asked, but it took some intuition and googling to get there.
This is basically the modern "google-driven" development experience as well.
The copy-pasting-from-SO trope has existed for some time, but it has gotten much worse in the last 5-7 years with the rise of blogspam and LLM-generated content on Medium that's SEO'd to the top and is effectively just a rewrite of the "getting started" page of some language/framework/tool, for resume-boosting points.
It seems like a lot of the models behind ChatGPT and Copilot were trained on that content, and they in turn tend to produce a lot of dead-end solutions for anything that isn't in the 90% of cases, which often leads to more pain than reading the documentation and building a solution through iteration/experimentation.
Yeah, I've had many moments in my AI-driven development cycles where I looped with the LLM in a dead end, eventually gave up, and fell back to regular old programming.
I think LLMs are great for reducing the friction with just starting a task, but they end up being more of a nuisance when you need to dig into more nuanced problems.
Does it? I've had some months-long gaps where I hadn't written a single line of code and I always felt like I could get back up to speed within a few days. I suppose that depends on where you expect your level to be, are we talking about $300k+ positions at big tech or your average software job?
Re-learning a large codebase would take longer, of course, but I'm talking about just getting up to speed like if I was starting a new project.
How? What I see are non-programmers who can create programs but still have hardly any ability to understand them, debug them, or even create them on their own without AI tools. This is similar to, but worse than, what we used to call Stack Overflow programmers, or coding-bootcamp kids.
Seems like everyone disagrees with you. Instead of replying to each of them individually, I will reply here:
I used to be a non-programmer (though extremely technically advanced). I was very good with the terminal because I always played around with ADB and other things. I even wrote Windows batch scripts (though looking at them now is embarrassing: I used a lot of repetition instead of loops).
When ChatGPT became available and a friend taught me how to use it to write Python scripts, I went all in. I didn't write a word of my own code; I just copy/pasted ChatGPT's output into Notepad++. It was sometimes really annoying to get ChatGPT to change a simple thing. It hallucinated often and "fixed" things I hadn't asked for help with. I used all these scripts for various personal things. For example, it made me a GUI to edit the metadata of every .mp3 in a folder.
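The core of a batch MP3-metadata script like the one described is small; here is a hypothetical sketch (the function name and title scheme are illustrative, and the actual tag write would need a tag library such as mutagen, shown only in a comment):

```python
from pathlib import Path

def retitle_mp3s(folder: str, prefix: str) -> list[str]:
    """Walk the .mp3 files in a folder and return the titles we would set."""
    titles = []
    for mp3 in sorted(Path(folder).glob("*.mp3")):
        new_title = f"{prefix} - {mp3.stem}"
        titles.append(new_title)
        # Actually writing the tag needs a tag library, e.g. mutagen:
        #   from mutagen.easyid3 import EasyID3
        #   tags = EasyID3(mp3); tags["title"] = new_title; tags.save()
    return titles
```

A GUI would just wrap this loop in a file picker and a text field.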
After several months of exposure to code, I knew the basic syntax and tried to code myself when I could, and Stack Overflow was a big help. I still had very basic skills and didn't even know what a `class` was.
Fast forward to now: I consider myself extremely good at Python, and decent in other languages. I now use classes and dataclasses all the time. I always add type hints to my code. I follow Python PEPs and try to design my code to be short, maintainable, and to the point. If a library lacks documentation, I just read the source code. I started using an IDE, and the IntelliSense is really a step up from Notepad++.
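As a small illustration of that dataclass-plus-type-hints style (a hypothetical example; the `Ticket` class and its fields are invented here):

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    """A typed record; an IDE or mypy can flag mistakes before runtime."""
    title: str
    assignee: str = "unassigned"
    tags: list[str] = field(default_factory=list)

    def summary(self) -> str:
        return f"[{self.assignee}] {self.title}"
```

The `default_factory` detail matters: a bare `tags: list[str] = []` would share one list across all instances, which is exactly the kind of bug type-hinted dataclasses make easier to avoid.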
I still use Copilot, but I refuse to paste large code blocks from any LLM; it makes the code much harder to maintain. If I really do like the code ChatGPT gives me when I'm stuck, I write it out myself instead of copy/pasting.
I don't know if everyone has the same experience as me, but I would certainly say that ChatGPT helped me become a successful programmer.
Until an obstacle hits them, the AI goes into a "reasoning" loop unable to find a solution, and they don't know how to "nudge" it onto the right path.
In some companies I've seen relatively trivial tickets bounced from developer to developer for months, with management believing they are hard problems.
I would say "expands the class of simple applications one can build without expertise" in the same way spreadsheet software enabled simple financial programs, website builders meant you didn't need to do HTML/JS nitty-gritty, etc. OTOH these technologies also enable financial blunders (I am personally responsible for at least one really nasty Excel bug), horrible website design, etc etc.
Edit: Arvind Narayanan of AI Snake Oil uses LLMs to create "one-time-use" apps, mostly little games or educational things for his kids. As an AI skeptic, I think this is cool and should be celebrated. But there is a serious downside when using it blindly for real work.
I have never liked the term "can program" anyway, programming is the easy part....
It's also creating a generation of management, clients, and non-technical people who send me output from Claude and suggest I implement what it said. Needless to say, this makes my life harder.
Programming is a mindset, more than just the ability to type (or copy/paste, nowadays) something that the compiler accepts. So no, they can't program. If something goes wrong they have no idea why it's wrong. They also have no idea why it's right, when it just happens to be.
Is that actually happening? I use o1 a lot for brainstorming etc., but I don't think I could build a program of any significant size without understanding how to program myself, so as to guide o1 and stitch its outputs together. It's also not at all uncommon for me to have to do things entirely myself; it doesn't have great insight into deep problems, for instance.
Of course not. The ultimate goal of AI is to get rid of the developers altogether and reduce costs (more profits!). You think BigTech are spending hundreds of billions just to make developers more productive while retaining them? That's not a profitable strategy.
The copy/paste-with-LLM interactive stage is just a transitional stage as the LLM improves. We'll be past that in 5 years' time.
No, AIs won't replace all developers -- you still need people doing the systems designs that the AI can implement code for. But it could easily reduce their numbers by some large percentage (50%? 80%?).
Edit: I would no longer advise my kids to get a degree in Computer Science.
> We’re becoming 10x dependent on AI. There’s a difference.
This is true, but I also don't need 10x the ability to write cursive any more. I used to have great handwriting; now it's very poor. That said, my ability to communicate has only increased throughout my life. Similarly, my ability to spell has probably diminished with auto-correct.
Yes, you will become dependent on AI if you're a developer, because those who use AI will take your job (if you don't) and be significantly more productive than you. It sucks, but that's reality. My grandfather started programming on physical punch cards; he knew a lot of stuff I have no idea about. That said, I'd run circles around him in the programming (and literal) sense today.
The question is really what skills you need in order to still execute the job effectively with AI. I'd argue that doesn't change much between having AI and not.

As a seasoned engineer, I spend probably 60% of my time designing, 30% managing/mentoring/reviewing, and 10% coding. That probably won't change much with AI. I'll just be able to get a lot more done in coding and design in the same timeframe. My skill set will likely remain the same, though writing small functions may diminish slightly.
> ...and 10% coding.

Disagree here:

> AI will take your job (if you don't) and be significantly more productive than you
Physically typing out the code is one of the last steps. Before that can happen the problem which is to be solved must be identified. An appropriate approach to the problem must be formulated.
Look around at some of the deranged hype in this AI cycle or look back on previous hype cycles. Many of the proposed "solutions" do not solve the problems of consumers. Solutions in search of a problem are as abundant as novel tech. These things dominate the zeitgeist. Perhaps they help inspire us, but they are not guaranteed to solve immediate problems.
There's an important distinction between problem solving skills, tools and the most probable token. To innovate is to do something new. AI, on the other hand, does the most probable thing.
Yes, but we already have illiterate programmers: there are companies who hire them, and a majority who recognise what they are and don't. I'm talking about the code copy-pasters who cannot really reason about the problems they're given, nor understand how the code they pasted works. My point is that this is not a new phenomenon, and I don't think it will fool those who weren't fooled by it before.
There are a lot of questions here that also apply to teaching CS. Do we want our graduates to still know the old way? Perhaps for job interviews, for days when GPT is down, and for problems it can't solve yet.
The "no AI without understanding the solution" rule is a start here.
I've ended up only using inline generative AI completions when I'm either on a strict deadline or making a demo for a conference/meetup talk. Otherwise I limit myself to pasting in and out of a chat window. This balance lets me meet deadlines when I need to, but otherwise retain my programming/SRE skills.
That being said, one of my top google searches are things like "kubernetes mount configmap as files" because I don't quite do them often enough to have them totally internalized yet.
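For reference, the answer to that search is a `volumes` entry plus a matching `volumeMounts` entry: each key in the ConfigMap becomes a file under the mount path. A minimal sketch (the names `demo`, `app`, and `app-config` are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: busybox
      volumeMounts:
        - name: config
          mountPath: /etc/config   # each ConfigMap key appears as a file here
  volumes:
    - name: config
      configMap:
        name: app-config
```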
I also use inline code generation, but it just seems like a more intelligent version of the code-aware autocomplete my IDE already provides. I love it, but it doesn’t feel like I’m atrophying because I’m still “driving”. It just saves a lot of keystrokes
I think there's a weird disconnect between the author's experience (the automated tools are good but they induce dependency) and my own experience (the tools are bad: they produce code that doesn't type-check, give explanations that don't make sense, or suggest back to me the half-broken code I already have).
I continue to believe these tools would be more reliable if, rather than merely doing next-token prediction on code, we trained on the co-occurrence of human-written code, the resulting IR, and execution traces, so the model must understand at least one level deeper what the code does. If the prompt is not just a prose description but, e.g., a skeleton of a trace or a set of inputs/outputs/observable side effects, then our generative tools would only allow outputs that meet those constraints, with the natural and reasonable cost that the engineer needs to think a little more deeply about what they want. Done right, I think this could make many of us better engineers.
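A cheap approximation of that constraint idea is possible today: filter generated candidates against a small input/output spec and keep only those that satisfy it. A sketch (the `candidates` here stand in for model outputs; names are invented for illustration):

```python
from typing import Callable

# A spec is a list of (input, expected output) examples.
Spec = list[tuple[int, int]]

def first_passing(candidates: list[Callable[[int], int]], spec: Spec):
    """Return the first candidate whose behavior matches every example."""
    for fn in candidates:
        try:
            if all(fn(x) == y for x, y in spec):
                return fn
        except Exception:
            continue  # a crashing candidate fails the spec
    return None

# Two "generated" implementations of absolute value; only one is correct.
bad = lambda x: x
good = lambda x: x if x >= 0 else -x
```

This is rejection sampling against observable behavior rather than training on traces, but it captures the same shift: the engineer specifies what the code must do, and outputs that violate the spec are discarded.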
I have never used any AI for coding; it would feel like using training wheels on my bike. I tested ChatGPT at one point, but I didn't even try coding with it.
There is a difference: when you code in a higher-level language, most of the time (outside some domains where you still need to understand assembly) you don't need to understand how the generated assembly works. That is not the case with AI-generated code: you still need to be able to understand, modify, and validate it.
Tangent: I remember hearing of a sci-fi story where a city has been automated for generations and stuff is starting to fall apart while nobody knows how to fix it. Does anyone know what the name is? I have been unable to find it for a while.
Eventually, won't AI be trained on a massive corpus of source code and tuned to program something as good as a lot of the existing bloatware out there that businesses pay through the nose for?
It's all in how you use it. Copy/paste driven development has long been a meme in the programming community.
For me, AI is like pairing with a very knowledgeable and experienced developer who has infinite time, infinite patience, and zero judgement about my stupid questions or knowledge gaps. Together we tackle roadblocks and dive deep into concepts, and as a result I accept fewer compromises in my work. Hell, I've managed to learn and become productive in Vim for the first time in my career with the help of AI.
But is a programmer who uses AI to program the same as a TypeScript programmer who doesn't understand JavaScript well?
Or a Python programmer who doesn't know C?
Or a C programmer who doesn't know assembly code?
Or an assembly coder who doesn't make his own PCBs?
Is AI orchestration just adding another layer of abstraction to the top of the programming stack?
The last guy who traditionally "knew everything" was Erasmus.
The shame is not in outsourcing the skill to, e.g., make a pencil[1]; rather, the shame is in not retaining one's major skill. In IT, that is actually thinking.

[1] https://www.youtube.com/watch?v=67tHtpac5ws
X to doubt. So far at least, to do anything nontrivial you still have to understand code. Sure it will raise the bar for what someone without such understanding can make, but a lot of that can already be done with code-free tools.
What most people who would call themselves programmers are working on is far beyond what AI can do today, or probably in the foreseeable future.
lgas | 1 year ago
> It took 3 times longer than if I had done it by hand because I just kept copy and pasting each error back to ChatGPT
You know that if you already know what you're doing you can tweak the output directly instead of mindlessly pasting it back into ChatGPT, right?
rvnx | 1 year ago
Set Cursor in "agent" mode (with Claude as a model).
Ask what you need.
It will automatically test your Docker deployment and configuration.
Then you become like me, a monkey clicking "Accept".
But a happy monkey.
dartos | 1 year ago
I spent a year just coasting at a job, not really writing any code.

It was quite difficult to get back to my previous level after starting my new job.

Now that I'm back there, I'm wary of losing my touch again.
jarsin | 1 year ago
I remember all those devs who could only do tutorials using visual tools like VB6 and ASP forms.
Once the requirements went beyond the basics, the project always got handed off to a real dev.
philomath_mn | 1 year ago
Given the rate of improvement up to that point, how much longer do you think this would be the case?
KaiserPro | 1 year ago
However just like learning to drive, I think there is much scope for forcing the basics once in a while.
The old programming blue shell, as it were.
dartos | 1 year ago
Please find one. I highly doubt there are any at all.
AI isn't translating idea to code. It's guessing what the most likely next code is, given a context.
Compiling is exactly translating code to machine code, but hiding some (or a ton of) technical details.
Those are two wildly different mechanisms to interact with.
Assembly and compiled languages require the same precision of thought, just at different levels of technical detail.