> 3. evaluating whether the end result works as intended.
I have some news for the author about what 80% of a programmer's job already consists of today.
There is also issue #4, which "idea guy" types frequently gloss over: if things do not work as intended, find out why and work out a way to fix the root cause. It's not impossible that an AI can get good at that, but I haven't seen it so far, and it definitely doesn't fit into the standard "write down what I want to have, press button, get results" AI workflow.
Generally, I feel this ignores the inherent value that there is in having an actual understanding of a system.
The current implementations of LLMs focus on providing good answers, but the real effort of modern-day programming is asking good questions: of the system, of the data, and of the people who supposedly understand the requirements.
LLMs might become good at this, but nobody seems to be doing much of it commercially just yet.
You make a good point, and those kinds of senior-engineer skills may be the least affected. My post does not argue against that. It argues that writing code manually may quickly become obsolete.
What I am trying to say is that people who see the output of their work as "code" will be replaced, just as human computers were. I believe even debugging will be increasingly aided by AI. I do not believe that AI will eliminate the need for system understanding, just to be clear.
Then again, you might argue that writing lines of code and manually debugging issues is exactly what builds your understanding of the system. I agree with that too, I suppose the challenge will be maintaining deep system knowledge as more tasks become automated.
I strongly disagree with this. “Computers” would not have been replaced by the machines that replaced them if those machines routinely produced incorrect results.
One could argue that for applications where correctness is not critical my position does not apply, however this is not the analogy that the article is making.
The trajectory of LLMs "routinely producing incorrect results" is heading downwards as we get more advanced reasoning models with test-time compute.
I don't know whether you've used some of the more recent models like Claude 3.5 Sonnet and o1, but to me it is very clear where the trajectory is headed. o3 is just around the corner, and o4 is currently in training.
People found value even in a model like GPT 3.5 Turbo, and that thing was really bad. But hey, at least it could write some short scripts and boilerplate code.
You are also comparing mathematical computation - which has only one correct solution - with programming, where the solution space is much broader. There are multiple valid solutions, some better than others. It is up to the human to evaluate the solution, as I've said in the post. Today, you may even need to fix the LLM's output. But in my experience, I find I need to do this far less often than before.
Wait, what? Human programmers produce incorrect results all the time; they're called bugs. If anything, we use automated systems when correctness is important - fuzzers, static analyzers, etc. And the "AI" systems are improving by leaps and bounds every month; look at SWE-Bench [1] for example. It's pretty obvious where this is all going.
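To make the fuzzing idea concrete, here is a toy sketch of the kind of automated checking mentioned above. The `dedupe` function and its planted ordering bug are invented for illustration, and real tools like AFL or Hypothesis are far more sophisticated:

```python
import random

# Deliberately buggy: converting through a set loses the original order.
def dedupe(xs):
    return list(set(xs))

# Reference behaviour: keep the first occurrence of each element.
def dedupe_keep_order(xs):
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# A crude fuzzer: throw random inputs at both and compare.
random.seed(1)
counterexample = None
for _ in range(1000):
    xs = [random.randint(0, 5) for _ in range(6)]
    if dedupe(xs) != dedupe_keep_order(xs):
        counterexample = xs
        break

# A few random inputs are enough to surface the ordering bug.
```

The point is the workflow, not the example: a machine tirelessly generates inputs while a human-specified invariant decides correctness.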
> Even he, subconsciously, knows it doesn't pay off to waste cognitive energy on what a machine could do instead.
> It's not laziness, but efficiency.
It's only efficient in the short term, not the long term. Now the programmer never understood the problem, and that is a problem in the long run. As any experienced engineer knows, understanding problems is how anyone gets better in the engineering field.
My post did not take the position that understanding problems isn't important.
Using LLMs can even help you understand the problem better. And it can bring you towards the solution faster. Using an LLM to solve a problem does not prevent understanding it. Does using a calculator prevent us from understanding mathematical concepts?
Technical understanding will still be valuable. Typing out code by hand will not.
This is why I've always called myself a "software plumber".
I quite liked the excavator analogy, because I think along similar lines. Mechanising drudge work in the construction industry enabled us to build superstructures hundreds of stories tall. I wonder if LLMs will enable us to do something similar with software.
Okay, but what will you actually do when your LLM writes code that doesn't error but produces incorrect behaviour, and no matter how long you spend refining your prompt or trying different models, it can't fix it for you?
Obviously you'll have to debug the code yourself, for which you'll need those programming skills that you claimed weren't relevant any more.
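For the sake of illustration, this is the flavour of failure being described: code that runs cleanly and looks plausible but is quietly wrong. The `median` function here is a made-up example, not taken from the article:

```python
# Runs without errors, looks reasonable, and is wrong:
# it ignores the even-length case entirely.
def median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

print(median([1, 3, 5]))     # 3 -- happens to be right for odd lengths
print(median([1, 2, 3, 4]))  # 3 -- the true median is 2.5
```

No exception, no stack trace; only someone who knows what a median is will catch it in review.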
Eventually you'll ask a software engineer, who will probably be paid more than you because "knowing what to build" and "evaluating the end result" are skills more closely related to product management - a difficult and valuable job that just doesn't require the same level of specialisation.
Lots of us have been the engineer here: confused, asking why you took approach X to solve this problem, and sheepishly being told "Oh, I actually didn't write this code, I don't know how it works".
You are confidently asserting that people can safely skip learning a whole way of thinking, not just some syntax and API specs. Some programmers can be replaced by an LLM, but not most of them.
I agree that you still need programming skills (today, at least). Yet people are using those programming skills less and less, as you can clearly see in the article [1] I referenced.
You are also making the assumption that LLMs won't improve, which I think is shortsighted.
I fully agree with the part about the job becoming more like product management. I would like to cite an excerpt of a post [2] by Andrew Ng, which I found valuable:
> Writing software, especially prototypes, is becoming cheaper. This will lead to increased demand for people who can decide what to build. AI Product Management has a bright future! Software is often written by teams that comprise Product Managers (PMs), who decide what to build (such as what features to implement for what users) and Software Developers, who write the code to build the product. Economics shows that when two goods are complements — such as cars (with internal-combustion engines) and gasoline — falling prices in one leads to higher demand for the other. For example, as cars became cheaper, more people bought them, which led to increased demand for gas. Something similar will happen in software.
> Given a clear specification for what to build, AI is making the building itself much faster and cheaper. This will significantly increase demand for people who can come up with clear specs for valuable things to build. (...) Many companies have an Engineer:PM ratio of, say, 6:1. (The ratio varies widely by company and industry, and anywhere from 4:1 to 10:1 is typical.) As coding becomes more efficient, teams will need more product management work (as well as design work) as a fraction of the total workforce.
To address your last point - no, I am not saying people should skip learning a whole way of thinking. In fact, the skills I outline for the future (supervising AI, evaluating results) all require understanding programming concepts and systems thinking. They do not, however, require manual debugging, writing lines of code by hand, deep knowledge of syntax, reading stack traces, or googling for answers.
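One concrete form that "supervising and evaluating" could take is sketched below: the human writes the behavioural checks, and any machine-written implementation is judged against them. The `slugify` function and its spec are hypothetical, chosen only for illustration:

```python
import re

# Stand-in for machine-generated code: the human never types this,
# only judges whether it meets the spec.
def slugify(text):
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

# The human's contribution: a precise, checkable specification.
checks = {
    "basic": slugify("Hello, World!") == "hello-world",
    "whitespace": slugify("  Spaces  ") == "spaces",
    "symbols": slugify("a + b = c") == "a-b-c",
}
assert all(checks.values()), checks
```

Whether this counts as "programming" or "product management" is, of course, exactly the debate in this thread.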
Oh boy, I sure love to see some guy with a blog waxing poetic about the future of AI: the insistence that "most people haven't come to terms with it yet," followed by a total fucking guess pulled out of his ass.
Reminds me of that one asshole who couldn't find a pen, and wrote a whole NYT article about "the demise of the pen".
The premise seems completely flawed to me. It’s like saying that because we have calculators, knowing basic arithmetic mentally will no longer be a necessary step to go through. All the more so when these calculators don't actually give you exact results, but excel at producing plausible-looking approximations based on a large corpus of computation samples.
So it seems not only very wrong about what the hard part of programming is, but also misguided about where (current) LLM use can shine, including in a programming context.
> Did the computer create a generation of "illiterate mathematicians"? No, it only freed us to solve higher-level problems.
I'm not so sure LLM-based code assistants/writers are directly comparable to the almost perfectly deterministic logic-gate chips, monitors, printers and so on that we call "the computer".
WordPress is likely a better comparison: it allowed more web sites to come into existence with little effort, but as far as I can tell it didn't free anyone to "solve higher-level problems". Deterministic code generation might have, to some extent, but it's mainly used in enterprise software, where the problem domains are old and stable, like accounting; the ontologies haven't changed since before "the computer", and the established knowledge professionals (accountants, 'legal') have a strong dominance over discourse, software rules and how compliance is achieved.
While I have been amazed by the progress of LLMs and believe that this ultimately leads to AGI, I still think that LLM-aided development is limited to things that are done often and are well documented in the training corpus. Writing code with new libraries or less-documented lower-level APIs still requires you to actually do it. LLMs slowly separate you from your understanding of what your code is doing, and if they don't meet a minimum threshold of proficiency on a topic, they will slowly and insidiously degrade your codebase until you realize Claude (or whoever else) has built a house of cards.
Except that going from human computers to mechanical computers made us more precise and more accurate at doing calculations, not less.
Modern AI assistants have yet to demonstrate a greater level of competence than even an inexperienced human programmer. Anyone telling you otherwise has fallen victim to extrapolation.
Just ask yourself whether AI-generated software is what you want managing your bank account, or the monitor next to your bed in the hospital. The lack of determinism makes LLMs unsuited to many tasks where precise outcomes are necessary.
> Anyone who tells you otherwise is deluding themselves.
Always good when a blog author ends with an ad hominem not supported by their claims.
It would have been more useful if they had explained how Rice's theorem was falsified, and how HALT/FRAME have been invalidated.
In my experience, for any problem that is non-trivial, LLMs require more knowledge and tacit experience.
Programming has always been more about domain understanding etc...
Things will change, but this is new tooling which requires new knowledge. I have had to start explaining Diaconescu's Theorem to a lot more people as an example.
This article read like one of those annoying LinkedIn “hot takes” that keeps showing up on my feed. I’d suggest sharing this type of content there rather than on HN, please.
"Those are the very skills that are going to become a lot less relevant, for the precise reason that, now, the machine can do those things."
A photocopier could steal books, a VHS recorder could steal movies, a computer could steal digital works.
LLMs now steal pseudo-legally. Any person without morals could have done the exact same thing before LLM laundromats by just using Google search and GitHub.
"Anyone who tells you otherwise is deluding themselves."
While I don’t disagree with the author, I think he misses the larger point that there are two distinct groups of code writers, whom I’ll call “eager” and “reluctant” until I can think of better terms.
“Eager” programmers find great joy in producing reams of computer code. They prefer to have iron-clad requirements handed to them and then work uninterrupted turning those requirements into a program.
“Reluctant” programmers have problems to solve, and have found that the quickest way to do that is to write code. If solving the problem is faster by not writing code they will do that.
“Computers”, that is, the job formerly performed by humans, favored the “eager” type of person. The people giving work to Computers were “reluctant”, and moved that work to faster tools when they became available.
The issue is that we have armies of “eager” programmers. Can they adapt to become “reluctant” or does the thought of writing less code cause them to resist change?
I sure wish I had access to the crystal ball that people like the author and LinkedIn top voices have, which lets them utter these decrees with unwavering conviction while warning that everyone who doesn't agree with them is "deluding themselves."
[1] https://www.swebench.com/
dapperdrake|1 year ago
Effectiveness counts. Effective people rarely need to wag the efficiency dog.
shermantanktop|1 year ago
/s
[1] https://nmn.gl/blog/ai-illiterate-programmers
[2] https://www.deeplearning.ai/the-batch/issue-284/
dapperdrake|1 year ago
Makes people wish for the demise of stupidity.
dapperdrake|1 year ago
Now you can buy a command-line interface to marketing copy at less than $1000 per month that never applies for sick days.
Not saying that I like it, but that seems to be what it is.
"Anyone who tells you otherwise is deluding themselves."
No, you are deluding yourself.