All this talk about ChatGPT replacing programmers and causing skill atrophy - meanwhile, all I'm getting out of ChatGPT is bullshit and hallucinations. Copilot is amazing at boilerplate, but that's about it - I don't even read suggestions that don't fall into that category anymore.
Copilot is amazing because it lets me stay in the flow when I need to churn stuff out (when I already know what I want). I would pay over $100/month for a faster/less jittery Copilot.
ChatGPT is cheap at $20/month but not even worth that price.
I think there are two camps of think-piece authors emerging: the ones who've tried a bunch of different examples of things and got passable-to-great-looking results, and the ones going deeper into specific areas and hitting the wall in terms of expertise and specificity. Using the GPT-4 API, I'm definitely often hitting limitations, especially around depth of information, and having to "prompt engineer" my way around them. After using a dozen prompt variants to try to prod it in the direction I want without seeing it reflect those changes, a bit of the magic wears off.
I'm bearish on the idea of long-term prompt engineering being a big skillset, since I imagine the "understanding the prompt" side of the tools will get better, but I don't see it necessarily getting around the need for specificity of input. It feels like writing a task ticket and giving it to a junior person - what you get back might not be what you need, and a lot of the time the true difficulty is knowing exactly what you need up front. Reducing that cycle time is wonderful, but it doesn't replace the hard-earned skills of knowing what to make.
I'm surprised with this response. I myself have found it extremely useful and ChatGPT has saved me tons of time with programming and non-programming tasks.
Yup. I always wonder: why is my experience not like others'? Are those PR people for Microsoft?
Example:
I gave ChatGPT a list, which looked like:

st street
av avenue

Convert this to YAML format as:

st:
  name: street

And so on.
It failed spectacularly. Not just once, but about 10 times. Even when it succeeded, it kept changing the output by performing operations I never mentioned in the prompt (like reordering, or merging duplicated values into a single key).
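For what it's worth, the task itself is a few lines of ordinary Python, which makes the failure all the more striking. A minimal sketch (the function name and exact output shape are my reading of the example above, not anything ChatGPT produced):

```python
def abbreviations_to_yaml(text: str) -> str:
    """Turn "abbr expansion" lines into YAML of the form:
    st:
      name: street
    No YAML library needed for output this simple."""
    lines = []
    for row in text.strip().splitlines():
        # Split on the first whitespace only, so multi-word
        # expansions ("st saint street") stay intact.
        abbr, expansion = row.split(maxsplit=1)
        lines.append(f"{abbr}:")
        lines.append(f"  name: {expansion}")
    return "\n".join(lines)

print(abbreviations_to_yaml("st street\nav avenue"))
```

Note this version preserves input order and never merges keys, precisely the invariants the model kept violating.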
Much like the evolving layers of abstraction in technology, the necessity for individuals to master the intricate, underlying layers wanes as time progresses. The article brings to mind the analogy of Assembly programming; while there are undoubtedly experts in this domain whose contributions are indispensable, the majority of programmers can comfortably rely on higher-level abstractions without delving into the complexities of Assembly. As AI continues to advance, general-purpose programming appears to be following a similar trajectory. Individuals have the choice to either become one of the few specialists at that level or to embrace and harness the emerging abstractions, thereby unlocking greater efficiency and innovation in their endeavors. It is crucial to strike a balance, recognizing the importance of both mastery and leveraging AI to optimize our creative potential.
> while there are undoubtedly experts in this domain whose contributions are indispensable, the majority of programmers can comfortably rely on higher-level abstractions without delving into the complexities of Assembly.
True. You don't need to be an assembly expert to be a great programmer, but I do think you need to have a solid understanding of how computers work all the way down to the CPU level in order to be a great programmer. And once you have that knowledge, assembly isn't a big hurdle anyway.
Yeah, and you can thank those abstractions that today, we have Microsoft bragging that they got the Teams chat client down to only 9 seconds loading time on a machine with several multi-GHz cores and gigabytes of RAM. But that's all fine I guess, because it's good enough.
Good enough for what? Good enough that people will grudgingly use it because instantly quitting their job over it would be overblown. Their life will only become a little bit worse through it.
I'm frankly scared how AI-created "good enough" software will look.
I think about it in the same terms. We make better and better abstractions which makes it easier and easier to code. Usually with some performance loss that better hardware can cover.
The implication of that is this will have a similar impact as previous abstractions.
But two questions:
1. Does this scale?
2. Is the type of difference between Python and Assembly the same as between Python and AI Codegen?
This just makes me want to combine compilers with an understanding of LLMs and how ChatGPT does what it does, to make real next-generation (but really next-level) programming languages.
LLMs will make distinctions like functional, dynamic, procedural, ... obsolete, just like all assembly languages are now lumped together as "assembly" but in fact used to vary by architecture (and even by each architecture's generation/version).
assembly -> C -> something made with a compiler which embeds ChatGPT
In a way, people who are concerned about skills atrophying are repeating the cycle of previous generations, shaking their fist at higher level programming languages or drawing digital images. How dare you not keep the skills of those who came before you! Time will not be so kind to this view.
Some skills will atrophy; as the article mentions, we don't really care about road navigation anymore. But this is not a bad thing: we can focus our concerns elsewhere. Do you care exactly how your home is heated? Do you know how to maintain every part of your car? Do you spend every weekend keeping your sewing skills up to date? Some of you might do all of these, but the point is we have let so many once-common skills become things relegated to experts/tech.
It's this relegation that has allowed experts to become even deeper experts in their niche. So I don't worry about skills atrophying at all. There will always be a new crop of nerds (affectionately) to obsess over whatever niche can exist.
It's not a bad thing until it's a bad thing. There are certainly computer-related tasks I am happy to relegate to AI; my concern is that we are already bad at maintaining software made by humans. If old-school software engineering becomes an arcane art as everyone becomes a prompt engineer, and software starts to explode in complexity, we'd better hope AI learns to maintain it too.
> In a way, people who are concerned about skills atrophying are repeating the cycle of previous generations, shaking their fist at higher level programming languages or drawing digital images.
Maybe some.
But I resonate with the title because I noticed I started leaning on ChatGPT to avoid having to think through things. I found myself tweaking ChatGPT prompts and hoping for a better response when the previous one wasn’t good enough. Often a better response is not found, then I have to struggle to start thinking for myself.
Over time that leads to less repetition thinking through problems, and more difficulty thinking through problems when you can't use ChatGPT. Is that a good trade-off?
I feel the same way I think. These days, I am happy when I am able to use GPT4 to complete a task faster or easier, but am also happy when I write some code "by myself".
However, as for the conclusion of the article: this "age" started only a few months ago. If you are going to call it an age, then the conclusion doesn't hold up. In the long term, programming "by hand" is likely to be similar to wood carving today: an artistic rather than utilitarian pursuit.
There are already multiple services and tools being built for the purpose of writing, deploying, and maintaining software in a 100% automated way using GPT-3.5 and GPT-4 and natural-language specifications. I am building one of them.
We can't assume that these recent efforts to freeze the progress of AI will be successful. So we should anticipate very significant improvements in performance over the next few years.
Very shortly, everyone will realize that it is quite a huge waste of time and money to wait for a person to write code.
Funny to hear that. I'm closer to 60 than 50, and it never occurs to me to use GPT to write my code. So I wonder if it's partly based on how long someone has been doing something. In retrospect, I never copied code from Stack Overflow either (I have used it occasionally to get ideas, but mostly felt the code I'd find there just wasn't very good, or at least was too different from my personal coding style to be something I'd want to use verbatim).
In the end I agree with you though -- if GPT code is "good enough" then paying people to write code will soon be looked at like using horse-drawn wagons to get from place to place.
Freedom? You've got a weird definition of "freedom" if you think surrendering your autonomy to whoever happens to be running the services you rely on is "freedom."
The problem is that long term, you won't have any idea if what it's spitting out is reasonable or not. We already have more than enough problems with systems being too complex to understand. Solving that with "more complexity we can't understand" doesn't seem a great solution, personally.
"The thing I’m most excited about in our weird new AI-enhanced reality is the way it allows me to be more ambitious with my projects. In the past I’ve had plenty of ideas for projects which I’ve ruled out because they would take a day—or days—of work to get to a point where they’re useful. I have enough other stuff to build already!"
If these projects are somehow meaningful to you on a personal level or in a small circle, great. However, let's not look past the fact that pretty much every human will achieve this super productivity. Hence, in a broader sense most projects will not be interesting, competitive...or more likely...looked at because of sheer abundance.
The other thought I'm having is about the rate of change in mastering AI. It's frankly inhumane. You can work in tech and thrive on a tech stack for some 5 years, sometimes 10. Working with AI, whatever you learn is outdated the next week.
I am an existentialist: I am perfectly comfortable in a world with no inherent meaning. Sheer existence and action brings me joy. I spend hours of my day tinkering with computers not because of utility, but for the pleasure it brings.
I think if we could elicit this person's thoughts thoroughly, we would find "meaning" underlying their perceived joy.
You can elicit my thoughts to your heart’s content. Of course there’s meaning. That’s different from inherent meaning (imbued into our world via a god or something).
> With ChatGPT, it’s too easy to implement ideas without understanding the underlying concepts or even what the individual lines in a program do. Doing so for unfamiliar tasks means failing to learn something new, while doing so for known tasks leads to skill atrophy.
Imagine this: a hypothetical "GPT-7" can effortlessly create a starship capable of shuttling you from Earth to Mars. It's so reliable that after 100 uses, it has never failed. Perfect flights, precise landings, and zero issues. All it takes is a simple prompt: "design a starship to Mars". In this scenario, is there a need to learn the intricacies of aerodynamics, gravitational forces, or rocket science?
The notion here is to liberate ourselves from limitations and focus on what truly interests us. That's the end game.
Someone has to. "Humanity designed tech that it became reliant on and forgot how to reimplement, so when it broke down nobody could fix it" is a recurring sci-fi trope.
I asked GPT-4 for the ImageMagick command to make the white parts of an image semi-transparent.
It generated a command that made the fuzzy white parts [+1 on fuzzy] fully transparent [bad].
I told it that the result is not semi-transparent.
It apologized and gave me another command that produced a blank image. In another case a grayish image.
I told it this is not what I wanted, and it just looped here saying I'm sorry and giving me one of these above solutions.
As a matter of fact, this looping back and forth between half-working and non-working solutions is something I've experienced every time the first result was not what I asked for...
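For what it's worth, the operation being asked for is easy to state precisely, which is part of what makes the looping so frustrating. A rough pure-Python sketch of the pixel logic (names and the "fuzz" threshold are mine; this is not an ImageMagick invocation, just the semantics of "white parts become semi-transparent"):

```python
def soften_white(pixels, threshold=240, alpha=128):
    """Given (R, G, B, A) tuples, make near-white pixels
    SEMI-transparent (alpha lowered, not zeroed) and leave
    every other pixel untouched."""
    out = []
    for r, g, b, a in pixels:
        if r >= threshold and g >= threshold and b >= threshold:
            out.append((r, g, b, alpha))   # near-white: semi-transparent
        else:
            out.append((r, g, b, a))       # everything else: unchanged
    return out

print(soften_white([(255, 255, 255, 255), (10, 10, 10, 255)]))
```

The model's failure mode above, making matched pixels fully transparent, corresponds to setting `alpha=0` here: a one-parameter difference that it repeatedly couldn't locate.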
Aside from the possibility of "emerging intelligence", I don't think this is the way to the AGI.
> Aside from the possibility of "emerging intelligence", I don't think this is the way to the AGI.
My intuition is that this is the only way to create AGI. I don't think anyone is ever going to carefully intentionally construct an AGI, it's almost certainly going to emerge from something conceptually fairly simple.
I don't think it's impossible that our own brains are also basically just a big statistical prediction model too. Maybe AGI just requires our models to be 10/100/1000x as good. Or our training data needs to be broader in a qualitative way rather than a quantitative way that we haven't quite worked out yet.
Imagine you’ve only been a programmer for 2 years. Imagine not knowing what imagemagick is. Now get an answer that gets you 99% of the way to where you want to be. Now look at the documentation to see why the parameters aren’t doing what you want them to do. You just saved hours of work.
I’ve been using AI to teach me concepts, and it’s great at it. It can sometimes be wrong, so having familiarity with the topic is important in letting it teach you. But OpenAI knows programming languages really well. It’s been amazing at teaching me concepts. Then I go test it to verify its teachings. It is certainly making me more productive at learning.
I feel like, if anything, AI will push people to upskill. A) It's obviously necessary; B) you will increasingly find yourself identifying edge cases; C) if you're in an exposed field (e.g. diagnostic medicine) you will find fewer people coming into the field, so there won't be a lot of backfills in the workforce.
ChatGPT is hit or miss, but I've had success using it either way. The hard part will be not relying on it too much. We already know that some devs won't read error messages and instead will just jump into debugging; I trust this'll only get more common with the advent of ChatGPT. However, this might actually save those devs, assuming ChatGPT answers correctly.
Once you can upload images and ask it to answer based on them, it'll be even more wild.
13years|2 years ago
However, I never was able to get it to write a successful function for anything that would have been useful. It got it wrong every time.
jason-phillips|2 years ago
GIGO, basically.
876978095789789|2 years ago
Are you using GPT-3 or 4?
hgsgm|2 years ago
Modern web app developers rarely write HTML by hand. They generate it from higher level language or templates.
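The point above is easy to illustrate with Python's stdlib `string.Template`, a deliberately minimal sketch; real apps reach for engines like Jinja or JSX, but the principle of generating HTML from a higher-level description is the same:

```python
from string import Template

# The developer writes the template once; the HTML is generated,
# never typed out per-page by hand.
page = Template("<h1>$title</h1>")
print(page.substitute(title="Hello"))
```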
nico|2 years ago
Just surrender and enjoy the freedom.
Of course you can always choose, just like you can choose to remember people’s phone numbers and driving directions. But you don’t need it.
nonethewiser|2 years ago
You do need to if your phone can’t store certain phone numbers. And you will be worse at it if you don’t practice.
Why can’t it store certain numbers? It’s impossible to know. Which phone numbers won’t it be able to store? It’s impossible to know.
simonw|2 years ago
Was your conclusion "GPT-4 isn't any good for imagemagick commands" or "GPT-4 isn't useful for anything"?
hgsgm|2 years ago
The only crucial difference is that we currently lack competition and self-owned versions.
muhammadusman|2 years ago
I've started doing this, getting some boilerplate or even throwaway code from ChatGPT helps me validate ideas quicker than if I wasn't using it.