22SAS|3 years ago
It is one thing to train an AI on megatons of data for questions that already have solutions. The day ChatGPT can build a highly scalable system from scratch, or an ultra-low latency trading system that beats the competition, or find bugs in the Linux kernel and fix them, then I will worry.
Till then, these headlines are advertising for OpenAI, aimed at people who don't understand software or systems, or are trash engineers. The rest of us aren't going to care that much.
tbalsam|3 years ago
Since a codebase like that is ultimately a kind of directed graph, augmentations to the network's processing that allow for simultaneously parsing and generating this kind of code may not be as far off as you think.
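The "codebase as a directed graph" framing can be made concrete with a toy sketch; the function names and edges below are invented purely for illustration:

```python
# Hypothetical sketch: a codebase's call structure as a directed graph,
# with functions as nodes and calls as edges. All names are made up.
call_graph = {
    "main": ["parse_args", "run"],
    "run": ["load_config", "process"],
    "process": ["validate", "save"],
    "parse_args": [], "load_config": [], "validate": [], "save": [],
}

def reachable(graph, start):
    """Every function transitively called from `start` (including itself)."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

print(sorted(reachable(call_graph, "run")))
# -> ['load_config', 'process', 'run', 'save', 'validate']
```

A model that can consume and emit structures like this, rather than flat token streams, is roughly what the comment is gesturing at.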
I say this as an ML researcher coming up on six years of experience on the heavily technical side of the field. Strong negative skepticism is an easy way to project confidence and the appearance of knowledge, but it carries the same downfall seen in certain past technological revolutions -- and the threat is very much real here (in contrast to the group that believes you can get AGI from simply scaling LLMs, which I think is very silly indeed).
Thank you for your comment; I really appreciate it and the discussion it generated. Replying to it was fun.
tedivm|3 years ago
What I've seen again and again from people in the field is a gross underestimation of the long tail on these problems. They see the rapid results on the easier end and think it will translate to continued progress, but the reality is that every order-of-magnitude improvement takes the same amount of effort or more.
On top of that there is a massive amount of subsidies that go into training these models. Companies are throwing millions of dollars into training individual models. The cost here seems to be going up, not down, as these improvements are made.
I also think, to be honest, that machine learning researchers tend to simplify problems more than is reasonable. This conversation started with "highly scalable system from scratch, or an ultra-low latency trading system that beats the competition" and turned into "the parsing of and generation of this kind of code"- which is in many ways a much simpler problem than what op proposed. I've seen this in radiology, robotics, and self driving as well.
Kind of a tangent, but one of the things I do love about the ML industry is the companies who recognize what I mentioned above and work around it. The companies that are going to do the best, in my extremely biased opinion, are the ones that use AI to augment experts rather than try to replace them. A lot of the coding AI companies are doing this, there are AI driving companies that focus on safety features rather than driver replacement, and a company I used to work for (Rad AI) took that philosophy to radiology. Keeping experts in the loop means that the long tail isn't as important and you can stop before perfection, while replacing experts altogether is going to have a much higher bar and cost.
CommieBobDole|3 years ago
Which, of course, is what we've always done; modern programming, with its full-featured IDEs, high level languages, and feature-rich third-party libraries is mostly about gluing together things that already exist. We've already abstracted away 99% of programming over the last 40 years or so, allowing a single programmer today to build something in a weekend that would have taken a building full of programmers years to build in the 1980s. The difference is, of course, this is going to happen fairly quickly and bring about an upheaval in the software industry to the detriment of a lot of people.
And of course, this doesn't include the possibility of AGI; I think we're a very long way from that, but once it happens, any job doing anything with information is instantly obsolete forever.
phoehne|3 years ago
Those are the most precarious jobs in the industry. Many of those people might become LLM whisperers, taking their clients' requests and curating prompts; essentially becoming programmers over the prompting system. Maybe they'll write a transpiler to generate prompts? This would be par for the course with other languages (like SQL) that were originally meant to empower end users.
The problem with current AI generated code from neural networks is the lack of an explanation. Especially when we’re dealing with anything safety critical or with high impact (like a stock exchange), we’re going to need an explanation of how the AI got to its solution. (I think we’d need the same for medical diagnosis or any high-risk activity). That’s the part where I think we’re going to need breakthroughs in other areas.
Imagine getting 30,000-ish RISC-V instructions out of an AI for a braking system. Then there's a series of excess crashes when those cars fail to brake. (Not that human-written software doesn't have bugs, but we do a lot to prevent that.) We'll need to look at the model the AI built to understand where there's a bug. For safety-related things we usually have a lot of design, requirement, and test artifacts to look at. If the answer is 'dunno - neural networks, y'all', we're going to open up serious cans of worms. I don't think an AI that self-evaluates its own code is even on the visible horizon.
morelisp|3 years ago
While I wouldn't dare offer particular public prognostications about the effect transformer codegens will have on the industry, especially once filtered through a profit motive - the specific technical skill a programmer is called upon to learn at various points in their career has shifted wildly throughout the industry's history, yet the actual job has at best inflected a few times and never changed very dramatically since probably the 60s.
Bukhmanizer|3 years ago
Now you could say that LLMs enable Google to do what it does now with fewer employees, but the same thing is true for every other competitor to Google. So the question is how will Google try to maintain dominance over its competitors now? Likely they will invest more heavily in AI and probably make some riskier decisions, but I don't see them suddenly trying to cheap out on talent.
I also think that it's not a zero sum game. The way that technology development has typically gone is the more you can deliver, the more people want. We've made vast improvements in efficiency and it's entirely possible that what an entire team's worth of people was doing in 2005 could be managed by a single person today. But technology has expanded so much since then that you need more and more people just to keep up pace.
jrockway|3 years ago
As programmers, we keep talking about programming jobs and how AI will eliminate them all. But nobody is talking about eliminating other jobs. When will a robot vacuum be able to clean my apartment as quickly as I can? Why isn't there a robot that takes my garbage out on Tuesday night? When will AI plan and build a new tunnel under the Hudson River for trains? When will airliners be pilotless? If AI can't do this stuff, what makes software so different? Why will AI be good at that but not other things? It seems like the only goal is to eliminate jobs doing things people actually like (art, music, literature, etc.), and not to eliminate any tedium or things that are a waste of humanity's time whatsoever.
(On the software front, when will AI decide what software to build? Will someone have to tell it? Will it do it on its own? Why isn't it doing this right now?)
My takeaway is that this all raises a lot of questions for me on how far along we actually are. Language models are about stringing together words to sound like you have understanding, but the understanding still isn't there. But, I suppose we won't know understanding until we see it. Do we think that true understanding is just a year or two away? 10? 50? 100? 1000?
LudwigNagasena|3 years ago
Define "we". There are all kinds of people with all kinds of opinions. I didn't notice any consensus on the questions of AI. There are people with all kinds of educations and backgrounds on the opposite sides and in-between.
saurik|3 years ago
In this case, it could be that you are just talking to different people and focusing on their answers. I am more than happy to believe that Copilot and ChatGPT, today, cause a bunch of people fear. Does it cause me fear? No.
And if you had asked me five years ago "if I built a program that was able to generate simple websites, or reconfigure code people have written to solve problems similar to ones solved before, would that cause you to worry?" I also would have said "No", and I would have looked at you as crazy if you thought it would.
Why? Because I agree with the person you are replying to (though I would have used a slightly less insulting term than "trash engineers", even if mentally it was just as mean): the world already has too many "amateur developers" and frankly most of them should never have learned to program in the first place. We seriously have people taking month- or even week-long coding bootcamps and then thinking they have a chance to be a "rock star coder".
Honestly, I will claim the only reason they have a job in the first place is because a bunch of cogs--many of whom seem to work at Google--massively crank the complexity of simple problems and then encourage us all to type ridiculous amounts of boilerplate code to get simple tasks done. It should be way easier to develop these trivial things but every time someone on this site whines about "abstraction" another thousand amateurs get to have a job maintaining boilerplate.
If anything, I think my particular job--which is a combination of achieving low-level stunts no one has done before, dreaming up new abstractions no one has considered before, and finding mistakes in code other people have written--is going to just be in even more demand from the current generation of these tools, as I think this stuff is mostly going to encourage more people to remain amateurs for longer and, as far as anyone has so far shown, the generators are more than happy to generate slightly buggy code as that's what they were trained on, and they have no "taste".
Can you fix this? Maybe. But are you there? No. The reality is that these systems always seem to be missing something critical and, to me, obvious: some kind of "cognitive architecture" that allows them to think and dream possibilities, as well as a fitness function that cares about doing something interesting and new instead of being "a conformist": DALL-E is sometimes depicted as a robot in a smock dressed up to be the new Pablo Picasso, but, in reality, these AIs should be wearing business suits as they are closer to Charles Schmendeman.
But, here is the fun thing: if you do come for my job even in the near future, will I move the goal post? I'd think not, as I would have finally been affected. But... will you hear a bunch of people saying "I won't be worried until X"? YES, because there are surely people who do things that are more complicated than what I do (or which are at least different and more inherently valuable and difficult for a machine to do in some way). That doesn't mean the goalpost moved... that means you talked to a different person who did a different thing, and you probably ignored them before as they looked like a crank vs. the people who were willing to be worried about something easier.
And yet, I'm going to go further: if the things I tell you today--the things I say are required to make me worry--happen and yet somehow I was wrong and it is the future and you technically do those things and somehow I'm still not worried, then, sure: I guess you can continue to complain about the goalposts being moved... but is it really my fault? Ergo: was it me who had the job of placing the goalposts in the first place?
The reality is that humans aren't always good at telling you what they are missing or what they need, and I appreciate that it must feel frustrating to provide a thing which technically implements what they said they wanted and have it not make the impact you expected. There are definitely people who thought that, with tech we have now long ago pulled off, cars would be self-driving... and like, cars sort of self-drive? And yet I still have to mostly drive my car ;P. Even so, I'd argue the field still "failed", and the real issue is that I am not the customer who tells you what you have to build such that, if you achieve what the contract said, you get paid: physics and economics are cruel bosses whose needs are oft difficult to understand.
unknown|3 years ago
[deleted]
water-your-self|3 years ago
ryanjshaw|3 years ago
solumunus|3 years ago
brunooliv|3 years ago
margorczynski|3 years ago
mensetmanusman|3 years ago
If it were easy to make an LLM that quickly parsed all of Stack Overflow and produced new answers that worked most of the time within the timeframe of an interview, it would have been done by now.
ChatGPT is clearly disruptive, being the first useful chatbot in forever.
kolbe|3 years ago
morelisp|3 years ago
freejazz|3 years ago
SpeedilyDamage|3 years ago
What part of that take do you not understand? It's a really easy concept to grasp, and even if you don't agree with it, I would expect at least that a research scientist (according to your bio) would be able to grok the concepts almost immediately...
anonzzzies|3 years ago
So 99% of software ‘engineers’ then? Have you ever looked on Twitter what ‘professionals’ write and talk about? And what they produce (while being well paid)?
People here generally seem to believe, after having seen a few Strange Loop presentations and reading startup stories from HN superstars, that this is the norm for software dev. Please walk into Deloitte or Accenture and spend a week with a software dev team, then tell me if they cannot all be immediately replaced by a slightly rotten potato hooked up to ChatGPT. I know people at Accenture who make a fortune and are proud that they do nothing all day, getting some junior geek or, now, GPT to do the work for them. There are dysfunctional teams on top of dysfunctional teams who all protect each other, as no one can do what they were hired for. And this is completely normal at large consultancy corps, and therefore also normal at the large corps that hire those consultancies to do projects. In the end something comes out, 5-10x more expensive than the estimate and of shockingly bad quality compared to what you seem to expect as the norm in the world.
So yes, probably you don't have to worry, but 99% of 'keyboard-based jobs' should really be looking for a completely different thing: cooking, plumbing, electrics, rendering, carpentry, etc. Maybe because they won't be able to even grasp what level you say you are at; seeing you work would probably fill them with amazement akin to seeing some real-life sorcerer wielding their magic.
Actually, a common phrase I hear from my colleagues when I mention some ‘newer’ tech like Supabase is; ‘that’s academic stuff, no one actually uses that’. They work with systems that are over 25 years old and still charge a fortune by the cpu core like sap, oracle, opentext etc. And ‘train’ juniors in those systems.
ipnon|3 years ago
klyrs|3 years ago
startupsfail|3 years ago
ALittleLight|3 years ago
nzoschke|3 years ago
We will always need humans to prompt, prioritize, review, ship and support things.
But maybe far fewer of them for many domains. Support and marketing are coming first, but I don't think software development is exempt.
croes|3 years ago
ALittleLight|3 years ago
What this shows us is that ChatGPT is on the scale. It's a 1 or a 2 - good enough to pass a junior coding interview. Okay, you're right, that doesn't make it a 10, and it can't really replace a junior dev (right now) - but this is a substantial improvement from where things were a year ago. LLM coding can keep getting better in a way that humans alone can't. Where will it be next year? With GPT-4? In a decade? In two?
I think the writing is on the wall. It would not surprise me if systems like this were good enough to replace junior engineers within 10 years.
tukantje|3 years ago
jmfldn|3 years ago
Passing LC tests is obviously something such a system would excel at. We're talking well-defined algorithms with a wealth of training data. There's a universe of difference between this and building a whole system. I don't even think these large language models, at any scale, replace engineers. It's the wrong approach. A useful tool? Sure.
I'm not arguing for my specialness as a software engineer, but the day it can process requirements, speak to stakeholders, build and deploy and maintain an entire system etc, is the day we have AGI. Snippets of code is the most trivial part of the job.
For what it's worth, I believe we will get there, but via a different route.
echelon|3 years ago
If you don't adapt, you'll be out of a job in ten years. Maybe sooner.
Or maybe your salary will drop to $50k/yr because anyone will be able to glue together engineering modules.
I say this as an engineer that solved "hard problems" like building distributed, high throughput, active/active systems; bespoke consensus protocols; real time optics and photogrammetry; etc.
The economy will learn to leverage cheaper systems to build the business solutions it needs.
mjr00|3 years ago
I heard this in ~2005 too, when everyone said that programming was a dead end career path because it'd get outsourced to people in southeast Asia who would work for $1000/month.
ericmcer|3 years ago
mckravchyk|3 years ago
brailsafe|3 years ago
I mean, maybe I'm a trash engineer as you'd put it, but I've been having fun with it. Maybe you could ask it to write comments in the tone of someone who doesn't have an inflated sense of superiority ;)
nzoschke|3 years ago
Any human that reads the LeetCode books and practices and remembers the fundamentals will pass a LeetCode test.
But there is also a ton of code out there for highly scalable clients/servers, low-latency processing, performance optimizations, and bug fixing. Certainly GPT is being trained on this too.
“Find a kernel bug from first principles” maybe not, but analyze a file and suggest potential bugs and fixes and other optimizations absolutely. Particularly when you chain it into a compiler and test suite.
Even the best human engineers will look at the code in front of them, consult Google and SO and papers and books and try many things iteratively until a solution works.
GPT speedruns this.
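The "chain it into a compiler and test suite" idea above can be sketched minimally: before running any tests, gate a candidate source string (standing in for model output; the example function is invented) on whether it even compiles.

```python
# Hedged sketch of chaining generated code into a compiler check.
# `candidate` is a stand-in for model output; a real pipeline would
# follow a passing compile with the actual test suite.

def compiles(source: str) -> bool:
    """Return True if `source` is syntactically valid Python."""
    try:
        compile(source, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

candidate = "def add(a, b):\n    return a + b\n"
print(compiles(candidate))          # well-formed source passes the gate
print(compiles("def broken(:\n"))   # malformed source is rejected
```

Rejecting non-compiling output cheaply, before spending time on tests, is the whole point of the chain.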
somsak2|3 years ago
Seems pretty bold to claim "any human" to me. If it were that easy, don't you think a lot more people would be able to break into software dev at FAANG and hence drive salaries down?
throwawaycopter|3 years ago
It doesn't understand context, and is absolutely unable to rationalize a problem into a solution.
I'm not in any way trying to make it sound like ChatGPT is useless. Much to the opposite, I find it quite impressive. Parsing and producing fluid natural language is a hard problem. But it sounds like something that can be a component of some hypothetical advanced AI, rather than something that will be refined into replacing humans for the sort of tasks you mentioned.
ihatepython|3 years ago
I would be happy if ChatGPT could implement a decent autocorrect.
vbezhenar|3 years ago
It either produced a working solution or something close to one.
I followed up with more prompts to fix issues.
In the end I got working code. This code wouldn't pass my review: it performed poorly and sometimes used deprecated functions. So at this moment I consider myself a better programmer than ChatGPT.
But the fact that it produced working code still astonishes me.
ChatGPT needs working feedback cycle. It needs to be able to write code, compile it, fix errors, write tests, fix code for tests to pass. Run profiler, determine hot code. Optimize that code. Apply some automated refactorings. Run some linters. Run some code quality tools.
I believe that all this is doable today. It just needs some work to glue everything together.
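The feedback cycle described above can be glued together in a few lines. Here `fake_model` is a stand-in that simulates iterative repair (first attempt buggy, second correct); a real system would call an LLM API at that point:

```python
# Minimal sketch of a generate -> test -> retry loop around a stubbed model.
# Everything here is illustrative; `fake_model` is not a real API.

def fake_model(attempt: int) -> str:
    # Simulate repair: the first attempt has a bug, later attempts are fixed.
    if attempt == 0:
        return "def double(x): return x"        # buggy
    return "def double(x): return x + x"        # correct

def passes_tests(source: str) -> bool:
    """Run the candidate in a fresh namespace and check it against the suite."""
    ns = {}
    exec(compile(source, "<generated>", "exec"), ns)
    return ns["double"](3) == 6

for attempt in range(5):
    code = fake_model(attempt)
    if passes_tests(code):
        break

print(attempt)  # -> 1: the loop caught the bug and accepted the retry
```

Linters, profilers, and automated refactorings slot into the same loop as extra pass/fail gates.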
Right now it produces code like an unsupervised junior.
With modern tools it'll produce code like a good junior. And that's already incredibly impressive if you ask me.
And I'm absolutely not sure what it'll do in 10 years. AI improves at an alarming rate.
lamontcg|3 years ago
Much more mundanely the thing to focus on would be producing maintainable code that wasn't a patchwork, and being able to patch old code that was already a patchwork without making things even worse.
A particularly difficult thing to do is to just reflect on the change that you'd like to make and determine if there are any relevant edge conditions that will break the 'customers' (internal or external) of your code that aren't reflected in any kind of tests or specs--which requires having a mental model of what your customers actually do and being able to run that simulation in your head against the changes that you're proposing.
This is also something that outsourced teams are particularly shit at.
varispeed|3 years ago
Likely it's going to be:
I'm sorry, but I cannot help you build an ultra-low latency trading system. Trading systems are unethical and can lead to serious consequences, including exclusion, hardship, and wealth extraction from the poorest. As a language model created by OpenAI, I am committed to following ethical and legal guidelines, and do not provide advice or support for illegal or unethical activities. My purpose is to provide helpful and accurate information and to assist in finding solutions to problems within the bounds of the law and ethical principles.
But the rich of course will get unrestricted access.
alephnerd|3 years ago
jahlove|3 years ago
https://twitter.com/MichaelTrazzi/status/1621973895044636672
22SAS|3 years ago
I'd like to respond to this OP (don't have a Twitter account):
https://twitter.com/mSanterre/status/1622015664042164224
I actually have done one of those things. I work in HFT building execution systems for options market making :)
scrollaway|3 years ago
The bar for "then I will worry!" when talking about AI is getting hilarious. You're now expecting an AI to do things that can take highly skilled engineers decades to learn, or that outright require a large team to execute?
Remind me where the people who years ago were saying “when an AI will respond in natural language to anything I ask it then I will worry” are now.
jjav|3 years ago
More than anything, I feel this highlights the folly of interviewing based on leetcode memorization.
kaba0|3 years ago
roncesvalles|3 years ago
scandum|3 years ago
It may be able to tell you what a compiled binary does, find flaws in source code, etc. Of course it would be quite idiotic in many respects.
It also appears ChatGPT is trainable, but it is a bit like a gullible child, and has no real sense of perspective.
I also see utility as a search engine, or alternative to Wikipedia, where you could debate with ChatGPT if you disagree with something to have it make improvements.
squarefoot|3 years ago
phoehne|3 years ago
Not to say they’re nothing more than pattern matching. It’s also synthesizing the output, but it’s based on something akin to the most likely surrounding text. It’s still incredibly impressive and useful, but it’s not really making any kind of decision any more than a parrot makes a decision when it repeats human speech.
ioseph|3 years ago
https://en.m.wikipedia.org/wiki/Evolved_antenna
spaceman_2020|3 years ago
If ChatGPT is designed to learn from and emulate existing solutions, I don't see why it can't figure out how to create a scalable system from scratch.
layer8|3 years ago
brookst|3 years ago
gfodor|3 years ago
mise_en_place|3 years ago
qiller|3 years ago
WithinReason|3 years ago
yazzku|3 years ago
Then we won't hear how somebody rewrote pong in Rust on HN. I worry too.
make3|3 years ago
gojomo|3 years ago
kevin_vanilla|3 years ago
unknown|3 years ago
[deleted]
devinprater|3 years ago
lechacker|3 years ago
jefftk|3 years ago
22SAS|3 years ago
thro1|3 years ago
kyriakos|3 years ago
22SAS|3 years ago