22SAS | 3 years ago

So, an LLM, trained extensively on StackOverflow and other data (possibly the plethora of LC solutions out there), is fed a bunch of LC questions and spits out the correct solutions? In other news, water is blue.

It is one thing to train an AI on megatons of data for questions that already have solutions. The day ChatGPT can build a highly scalable system from scratch, or an ultra-low latency trading system that beats the competition, or find bugs in the Linux kernel and fix them; then I will worry.

Till then, these headlines are advertising for OpenAI, aimed at people who don't understand software or systems, or who are trash engineers. The rest of us aren't going to care that much.

tbalsam|3 years ago

If it helps, this likely is coming. I think we have a tendency to mentally move the goalposts when it comes to this kind of thing as a self-defense mechanism. Years ago this would have been a similar level of impossibility.

Since a codebase like that is ultimately just a kind of directed graph, augmentations to the processing of the network that allow for the simultaneous parsing and generation of this kind of code may not be as far off as you think.
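
(To make the "codebase as a directed graph" framing concrete, here is a minimal, hypothetical sketch -- not anything a current model actually does -- that extracts a module-level import graph using Python's stdlib ast module; the toy "modules" are invented for illustration.)

```python
import ast
from collections import defaultdict

def import_graph(sources):
    """Build a directed graph mapping each module name to the set of
    modules it imports, given {module_name: source_code}."""
    graph = defaultdict(set)
    for name, code in sources.items():
        for node in ast.walk(ast.parse(code)):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    graph[name].add(alias.name)
            elif isinstance(node, ast.ImportFrom) and node.module:
                graph[name].add(node.module)
    return dict(graph)

# Two toy "modules": app imports util; util imports os and json.
sources = {
    "app": "import util\nx = 1\n",
    "util": "import os\nfrom json import loads\n",
}
graph = import_graph(sources)
print({k: sorted(v) for k, v in graph.items()})
# {'app': ['util'], 'util': ['json', 'os']}
```

Feeding a structured representation like this to a model, rather than raw text, is one plausible form such an augmentation could take.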

I say this as an ML researcher coming up on six years of experience on the heavily technical side of the field. Strong skepticism is an easy way to project confidence and the appearance of knowledge, but it has also been the downfall of pundits in certain past technological revolutions -- and the threat is very much real here (in contrast to the group that believes you can get AGI from simply scaling LLMs, which I think is very silly indeed).

Thank you for your comment; I really appreciate it and the discussion it generated. Replying to it was fun.

tedivm|3 years ago

I've worked in ML for a while (on the MLOps side of things) and have been in the industry for a bit, and one thing I think is extremely common is for ML researchers to grossly underestimate the amount of work needed to make improvements. We've been a year away from full self-driving cars for the last six years, and it seems like people are getting more cautious in their timing around that instead of more optimistic. Robotic manufacturing, driven by AI, was supposedly going to supplant human labor and speed up manufacturing in every segment from product creation to warehousing, but Amazon warehouses are still full of people, not robots.

What I've seen again and again from people in the field is a gross underestimation of the long tail on these problems. They see the rapid results on the easier end and think it will translate into continued progress, but the reality is that every order-of-magnitude improvement takes the same amount of effort or more.

On top of that, massive subsidies go into training these models. Companies are throwing millions of dollars into training individual models. The cost here seems to be going up, not down, as these improvements are made.

I also think, to be honest, that machine learning researchers tend to simplify problems more than is reasonable. This conversation started with "highly scalable system from scratch, or an ultra-low latency trading system that beats the competition" and turned into "the parsing of and generation of this kind of code" -- which is in many ways a much simpler problem than what the OP proposed. I've seen this in radiology, robotics, and self-driving as well.

Kind of a tangent, but one of the things I do love about the ML industry is the companies who recognize what I mentioned above and work around it. The companies that are going to do the best, in my extremely biased opinion, are the ones that use AI to augment experts rather than try to replace them. A lot of the coding AI companies are doing this, there are AI driving companies that focus on safety features rather than driver replacement, and a company I used to work for (Rad AI) took that philosophy to radiology. Keeping experts in the loop means that the long tail isn't as important and you can stop before perfection, while replacing experts altogether is going to have a much higher bar and cost.

CommieBobDole|3 years ago

I don't think ChatGPT or its successors will be able to do large-scale software development, defined as 'translating complex business requirements into code', but the actual act of programming will become more one of using ML tools to create functions, and writing code to link them together with business logic. It'll still be programming, but it will just start at a higher level, and a single programmer will be vastly more productive.

Which, of course, is what we've always done; modern programming, with its full-featured IDEs, high level languages, and feature-rich third-party libraries is mostly about gluing together things that already exist. We've already abstracted away 99% of programming over the last 40 years or so, allowing a single programmer today to build something in a weekend that would have taken a building full of programmers years to build in the 1980s. The difference is, of course, this is going to happen fairly quickly and bring about an upheaval in the software industry to the detriment of a lot of people.

And of course, this doesn't include the possibility of AGI; I think we're a very long way from that, but once it happens, any job doing anything with information is instantly obsolete forever.

phoehne|3 years ago

I think you’re right in one sense, and we both agree LLMs are not sufficient. I think they are definitely the death knell for the junior Python developer who slaps together common APIs by googling the answers, the same way good optimizing C, C++, etc. compilers destroyed the need for widespread knowledge of assembly programming. 100% agreed on that.

Those are the most precarious jobs in the industry. Many of those people might become LLM whisperers, taking their clients’ requests and curating prompts -- essentially becoming programmers over the prompting system. Maybe they’ll write a transpiler to generate prompts? This would be par for the course with other languages (like SQL) that were originally meant to empower end users.

The problem with current AI generated code from neural networks is the lack of an explanation. Especially when we’re dealing with anything safety critical or with high impact (like a stock exchange), we’re going to need an explanation of how the AI got to its solution. (I think we’d need the same for medical diagnosis or any high-risk activity). That’s the part where I think we’re going to need breakthroughs in other areas.

Imagine getting 30,000-ish RISC-V instructions out of an AI for a braking system. Then there’s a series of excess crashes when those cars fail to brake. (Not that human-written software doesn’t have bugs, but we do a lot to prevent that.) We’ll need to look at the model the AI built to understand where the bug is. For safety-related things we usually have a lot of design, requirement, and test artifacts to look at. If the answer is ‘dunno -- neural networks, y’all’, we’re going to open up serious cans of worms. I don’t think an AI that self-evaluates its own code is even on the visible horizon.

morelisp|3 years ago

Translating an idiomatic structured loop into assembly used to be an "L3" question (honestly, probably higher), yet compilers could do it with substantially fewer resources than, and decades before, any of these LLMs.

While I wouldn't dare offer public prognostications about the effect transformer codegens will have on the industry, especially once filtered through a profit motive, the specific technical skills a programmer is called upon to learn have shifted wildly throughout the industry's history, yet the actual job has at best inflected a few times and never changed very dramatically since probably the 60s.

Bukhmanizer|3 years ago

I agree this would have been thought to be impossible a few years ago, but I don't think it's necessarily moving the goalposts. I don't think software engineers are really paid for their labour exactly. FAANG is willing to pay top dollar for employees, because that's how they retain dominance over their markets.

Now you could say that LLMs enable Google to do what it does now with fewer employees, but the same is true for every other competitor to Google. So the question is how Google will try to maintain dominance over its competitors now. Likely they will invest more heavily in AI and probably make some riskier decisions, but I don't see them suddenly trying to cheap out on talent.

I also think that it's not a zero sum game. The way that technology development has typically gone is the more you can deliver, the more people want. We've made vast improvements in efficiency and it's entirely possible that what an entire team's worth of people was doing in 2005 could be managed by a single person today. But technology has expanded so much since then that you need more and more people just to keep up pace.

jrockway|3 years ago

I'm kind of interested in how AI is going to interface with the world. Humans have a lot of autonomy to change the physical world they're in; from rearranging furniture, to building structures, to visiting other worlds. Why isn't AI doing any of that stuff?

As programmers, we keep talking about programming jobs and how AI will eliminate them all. But nobody is talking about eliminating other jobs. When will a robot vacuum be able to clean my apartment as quickly as I can? Why isn't there a robot that takes my garbage out on Tuesday night? When will AI plan and build a new tunnel under the Hudson River for trains? When will airliners be pilotless? If AI can't do this stuff, what makes software so different? Why will AI be good at that but not other things? It seems like the only goal is to eliminate jobs doing things people actually like (art, music, literature, etc.), and not to eliminate any of the tedium or the things that are a waste of humanity's time.

(On the software front, when will AI decide what software to build? Will someone have to tell it? Will it do it on its own? Why isn't it doing this right now?)

My takeaway is that this all raises a lot of questions for me on how far along we actually are. Language models are about stringing together words to sound like you have understanding, but the understanding still isn't there. But, I suppose we won't know understanding until we see it. Do we think that true understanding is just a year or two away? 10? 50? 100? 1000?

LudwigNagasena|3 years ago

> I think we have a tendency to mentally move the goalposts when it comes to this kind of thing as a self-defense mechanism. Years ago this would have been a similar level of impossibility.

Define "we". There are all kinds of people with all kinds of opinions. I didn't notice any consensus on the questions of AI. There are people with all kinds of educations and backgrounds on the opposite sides and in-between.

saurik|3 years ago

I've been hearing this "you're moving the goalposts" argument for over 20 years now, ever since I was a college student taking graduate courses in Cognitive Science (which my University decided to cobble together at the time out of Computer Science, Psychology, Biology, and Geography), and I honestly don't think it is a useful framing of the argument.

In this case, it could be that you are just talking to different people and focusing on their answers. I am more than happy to believe that Copilot and ChatGPT, today, cause a bunch of people fear. Does it cause me fear? No.

And if you had asked me five years ago "if I built a program that was able to generate simple websites, or reconfigure code people have written to solve problems similar to ones solved before, would that cause you to worry?" I also would have said "No", and I would have looked at you as crazy if you thought it would.

Why? Because I agree with the person you are replying to (though I would have used a slightly less insulting term than "trash engineers", even if mentally it was just as mean): the world already has too many "amateur developers" and frankly most of them should never have learned to program in the first place. We seriously have people taking month- or even week-long coding bootcamps and then thinking they have a chance to be a "rock star coder".

Honestly, I will claim the only reason they have a job in the first place is because a bunch of cogs--many of whom seem to work at Google--massively crank up the complexity of simple problems and then encourage us all to type ridiculous amounts of boilerplate code to get simple tasks done. It should be way easier to develop these trivial things, but every time someone on this site whines about "abstraction" another thousand amateurs get to have a job maintaining boilerplate.

If anything, I think my particular job--which is a combination of achieving low-level stunts no one has done before, dreaming up new abstractions no one has considered before, and finding mistakes in code other people have written--is going to just be in even more demand from the current generation of these tools, as I think this stuff is mostly going to encourage more people to remain amateurs for longer and, as far as anyone has so far shown, the generators are more than happy to generate slightly buggy code as that's what they were trained on, and they have no "taste".

Can you fix this? Maybe. But are you there? No. The reality is that these systems always seem to be missing something critical and, to me, obvious: some kind of "cognitive architecture" that allows them to think and dream possibilities, as well as a fitness function that cares about doing something interesting and new instead of being "a conformist": DALL-E is sometimes depicted as a robot in a smock dressed up to be the new Pablo Picasso, but, in reality, these AIs should be wearing business suits as they are closer to Charles Schmendeman.

But, here is the fun thing: if you do come for my job even in the near future, will I move the goal post? I'd think not, as I would have finally been affected. But... will you hear a bunch of people saying "I won't be worried until X"? YES, because there are surely people who do things that are more complicated than what I do (or which are at least different and more inherently valuable and difficult for a machine to do in some way). That doesn't mean the goalpost moved... that means you talked to a different person who did a different thing, and you probably ignored them before as they looked like a crank vs. the people who were willing to be worried about something easier.

And yet, I'm going to go further: if the things I tell you today--the things I say are required to make me worry--happen and yet somehow I was wrong and it is the future and you technically do those things and somehow I'm still not worried, then, sure: I guess you can continue to complain about the goalposts being moved... but is it really my fault? Ergo: was it me who had the job of placing the goalposts in the first place?

The reality is that humans aren't always good at telling you what you are missing or what they need; and I appreciate that it must feel frustrating to provide a thing which technically implements what they said they wanted and have it not make the impact you expected--there are definitely people who thought that, with the tech we have now long ago pulled off, cars would be self-driving... and, like, cars sort of self-drive? and yet I still have to mostly drive my car ;P--but I'd argue the field still "failed", and the real issue is that I am not the customer who tells you what you have to build such that, if you achieve what the contract said, you get paid: physics and economics are cruel bosses whose needs are oft difficult to understand.

water-your-self|3 years ago

I think there's a real story here behind the ownership and usage of proprietary data.

ryanjshaw|3 years ago

I think OP set relatively simple goals. How long until AI can architect, design, build, test, deploy and integrate commercial software systems from scratch, and handle users submitting bug reports that say "The OK button doesn't work when I click it!"?

solumunus|3 years ago

So you've drank the industry kool aid.

brunooliv|3 years ago

Not to be the devil's advocate or something, but I hope you understand that the vast majority of FAANG engineers CAN'T build a highly scalable system from scratch, much less fix bugs in the Linux kernel... So that argument feels really moot to me... If anything this just shows that gatekeeping good engineers by putting these LC puzzles as a requirement for interviews is a sure way to hire a majority of people who aren't adding THAT MUCH MORE value than an LLM already does... Yikes... On top of that, they'll be bad team players, and you'll be lucky if they can string together two written paragraphs...

margorczynski|3 years ago

I agree; people in general overestimate the skills and output of the average developer, where many (even in FAANG) are simply not capable of creating anything more than a simple CRUD app or tooling script without explicit guidance. And being good or very good with algorithms and estimating big-O complexity doesn't by itself make you a good software engineer (though it can help).

mensetmanusman|3 years ago

Don’t understand this take.

If it were easy to make an LLM that quickly parses all of StackOverflow and produces new answers that work most of the time within the timeframe of an interview, it would have been done by now.

ChatGPT is clearly disruptive being the first useful chatbot in forever.

kolbe|3 years ago

It kind of depends on the frame of the solution. Google can answer leetcode questions, leetcode's answers section can answer them as well. If ChatGPT is solving them, that's one thing, but if it's just mapping the question to a solution found somewhere, then not so impressive.

morelisp|3 years ago

While I think the jury is still out on whether ChatGPT is truly useful or not, passing an L3 hiring test is not evidence of that one way or another.

freejazz|3 years ago

what does it being easy have to do with it?

SpeedilyDamage|3 years ago

You don't understand the take that just because ChatGPT can pass a coding interview doesn't mean the coding interview is useless or that ChatGPT could actually do the job?

What part of that take do you not understand? It's a really easy concept to grasp, and even if you don't agree with it, I would expect at least that a research scientist (according to your bio) would be able to grok the concepts almost immediately...

anonzzzies|3 years ago

> or are trash engineers.

So 99% of software ‘engineers’, then? Have you ever looked on Twitter at what ‘professionals’ write and talk about? And at what they produce (while being well paid)?

People here generally seem to believe, after having seen a few Strange Loop presentations and read startup stories from HN superstars, that this is the norm for software dev. Please walk into Deloitte or Accenture and spend a week with a software dev team, then tell me if they couldn't all be immediately replaced by a slightly rotten potato hooked up to ChatGPT. I know people at Accenture who make a fortune and are proud that they do nothing all day, getting some junior geek or, now, GPT to do the work for them. There are dysfunctional teams on top of dysfunctional teams who all protect each other, as no one can do what they were hired for. And this is completely normal at large consultancy corps, and therefore also normal at the large corps that hire these consultancies to do projects. In the end something comes out, 5-10x more expensive than the estimate and of shockingly bad quality compared to what you seem to expect as the norm in the world.

So yes, you probably don’t have to worry, but 99% of ‘keyboard-based jobs’ should really be looking for a completely different line of work; cooking, plumbing, electrics, rendering, carpentry, etc. -- they won’t even be able to grasp the level you say you are at; seeing you work would probably fill them with amazement akin to watching a real-life sorcerer wielding their magic.

Actually, a common phrase I hear from my colleagues when I mention some ‘newer’ tech like Supabase is: ‘that’s academic stuff, no one actually uses that’. They work with systems that are over 25 years old and still charge a fortune per CPU core, like SAP, Oracle, OpenText, etc. And ‘train’ juniors on those systems.

ipnon|3 years ago

Until ChatGPT can Slack my PM, attend my sprint plannings, read my Jira tickets, and synthesize all of this into actionable tasks on my codebase, I think we have job security. To be clear, though, we are starting to see this capability on the horizon.

klyrs|3 years ago

Your PM should be the first to be worried, honestly. I keep hearing people describing their job as "I just click around on Jira while I sit through meetings all day."

startupsfail|3 years ago

The capability will be available in around two weeks, once RLHF alignment with software engineering tasks is completed. The deployment will take around twelve hours, most of it taken up by you and your manager reviewing the integration summary pages. You can keep your job, supervising and reviewing how your role is being played, for the following 6 months, until the human supervision role is deemed unnecessary.

ALittleLight|3 years ago

One issue is that there are a much larger number of people who can attend meetings, read Jira tickets, and then describe what they need to an LLM. As the number of people who can do your job increases dramatically, your job security will decline.

nzoschke|3 years ago

Perhaps an engineering manager can use one trained on entire Slack history, all Jira tickets, and all PRs to stub out some new tickets and even first PR drafts themselves…

We will always need humans to prompt, prioritize, review, ship and support things.

But maybe far fewer of them for many domains. Support and marketing are coming first, but I don’t think software development is exempt.

croes|3 years ago

Not quite. You have job security only as long as companies don't believe ChatGPT can do all that.

ALittleLight|3 years ago

I think this is a huge demonstration of progress. Shrugging it off as "water is blue" ignores the fact that a year ago this wouldn't have been possible. At one end of the "programmer" scale is hacking basic programs together by copying off of stack overflow and similar - call that 0. At the other end is the senior/principal software architect - designing scalable systems to address business needs, documenting the components and assigning them out to other developers as needed - call that 10.

What this shows us is that ChatGPT is on the scale. It's a 1 or a 2 - good enough to pass a junior coding interview. Okay, you're right, that doesn't make it a 10, and it can't really replace a junior dev (right now) - but this is a substantial improvement from where things were a year ago. LLM coding can keep getting better in a way that humans alone can't. Where will it be next year? With GPT-4? In a decade? In two?

I think the writing is on the wall. It would not surprise me if systems like this were good enough to replace junior engineers within 10 years.

tukantje|3 years ago

We don’t get junior engineers to solve problems; we tend to get them because they grow into other roles.

jmfldn|3 years ago

Exactly. This article, and many like it, are pure clickbait.

Passing LC tests is obviously something such a system would excel at. We're talking well-defined algorithms with a wealth of training data. There's a universe of difference between this and building a whole system. I don't even think these large language models, at any scale, replace engineers. It's the wrong approach. A useful tool? Sure.

I'm not arguing for my specialness as a software engineer, but the day it can process requirements, speak to stakeholders, build and deploy and maintain an entire system etc, is the day we have AGI. Snippets of code is the most trivial part of the job.

For what it's worth, I believe we will get there, but via a different route.

echelon|3 years ago

> The rest of us aren't going to care that much.

If you don't adapt, you'll be out of a job in ten years. Maybe sooner.

Or maybe your salary will drop to $50k/yr because anyone will be able to glue together engineering modules.

I say this as an engineer that solved "hard problems" like building distributed, high throughput, active/active systems; bespoke consensus protocols; real time optics and photogrammetry; etc.

The economy will learn to leverage cheaper systems to build the business solutions it needs.

mjr00|3 years ago

> If you don't adapt, you'll be out of a job in ten years. Maybe sooner. Or maybe your salary will drop to $50k/yr because anyone will be able to glue together engineering modules. [...] The economy will learn to leverage cheaper systems to build the business solutions it needs.

I heard this in ~2005 too, when everyone said that programming was a dead end career path because it'd get outsourced to people in southeast Asia who would work for $1000/month.

ericmcer|3 years ago

You really think in <10 years AI will be able to take a loose problem like: "our file uploader is slow" and write code that fixes the issue in a way that doesn't compromise maintainability? And be trustworthy enough to do it 100% of the time?

mckravchyk|3 years ago

We cannot be too sure about the hard problems, but it's certain we are screwed either way. The bulk of the work being done is on problems that have already been solved. It's sufficient that AI can thrive at building boring CRUD apps (and aren't we at that point already?); just give it time to be integrated into existing business workflows, and the number of available positions will shrink by an order of magnitude and the salaries will be nothing special compared to other white-collar work. You will be impacted by supply and demand, no matter what your skills are.

brailsafe|3 years ago

"Please write a dismissal of yourself with the tone and attitude of a stereotypical linux contributor"

I mean, maybe I'm a trash engineer as you'd put it, but I've been having fun with it. Maybe you could ask it to write comments in the tone of someone who doesn't have an inflated sense of superiority ;)

nzoschke|3 years ago

Agree LeetCode is one of the least surprising starting points.

Any human that reads the LeetCode books and practices and remembers the fundamentals will pass a LeetCode test.

But there is also a ton of code out there for highly scalable clients/servers, low-latency processing, performance optimizations, and bug fixing. Certainly GPT is being trained on this too.

“Find a kernel bug from first principles”, maybe not, but analyze a file and suggest potential bugs, fixes, and other optimizations, absolutely. Particularly when you chain it into a compiler and test suite.

Even the best human engineers will look at the code in front of them, consult Google and SO and papers and books and try many things iteratively until a solution works.

GPT speedruns this.

somsak2|3 years ago

> Any human that reads the LeetCode books and practices and remembers the fundamentals will pass a LeetCode test.

Seems pretty bold to claim "any human" to me. If it were that easy, don't you think a lot more people would be able to break into software dev at FAANG and hence drive salaries down?

throwawaycopter|3 years ago

Correct me if I'm wrong, but answering questions for known answers is precisely the kind of thing a well trained LLM is built for.

It doesn't understand context, and is absolutely unable to rationalize a problem into a solution.

I'm not in any way trying to make it sound like ChatGPT is useless. Much to the opposite, I find it quite impressive. Parsing and producing fluid natural language is a hard problem. But it sounds like something that can be a component of some hypothetical advanced AI, rather than something that will be refined into replacing humans for the sort of tasks you mentioned.

ihatepython|3 years ago

My take is that this explains why Google code quality is so bad, along with their painfully bad build systems.

I would be happy if ChatGPT could implement a decent autocorrect.

vbezhenar|3 years ago

I tinkered with ChatGPT. There are some isolated components I wrote recently, and I asked it to write them.

It either produced a working solution or something similar to a working solution.

I followed with more prompts to fix issues.

In the end I got working code. This code wouldn't pass my review: it performed badly and sometimes used deprecated functions. So at this moment I consider myself a better programmer than ChatGPT.

But the fact that it produced working code still astonishes me.

ChatGPT needs a working feedback cycle. It needs to be able to write code, compile it, fix errors, write tests, and fix the code until the tests pass. Run a profiler, find the hot code, optimize it. Apply some automated refactorings. Run some linters. Run some code-quality tools.

I believe that all this is doable today. It just needs some work to glue everything together.
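
A loop like that can be sketched in a few lines. Below is a minimal, hypothetical version: the `ask_llm` callable is a stand-in for a real model API (an assumption, not any actual OpenAI interface), and candidate code is checked by running a test snippet in a subprocess.

```python
import subprocess
import sys
import tempfile

def run_candidate(code: str, test: str) -> tuple:
    """Run a candidate solution plus a test snippet in a subprocess;
    return (passed, combined stdout/stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + test + "\n")
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=30)
    return proc.returncode == 0, proc.stdout + proc.stderr

def feedback_loop(ask_llm, prompt: str, test: str, max_rounds: int = 3):
    """Ask the "model" for code, run the tests, and feed failures back
    until the tests pass or we give up. Returns passing code or None."""
    code = ask_llm(prompt)
    for _ in range(max_rounds):
        passed, output = run_candidate(code, test)
        if passed:
            return code
        code = ask_llm(prompt + "\nYour last attempt failed with:\n" + output)
    return None

# Demo with a stub "model" that fails once, then succeeds (no real API).
attempts = iter(["def add(a, b):\n    return a * b",   # buggy first try
                 "def add(a, b):\n    return a + b"])  # fixed second try
result = feedback_loop(lambda _: next(attempts),
                       "Write add(a, b).", "assert add(2, 3) == 5")
print(result is not None)  # True
```

Profiling and linting would slot into the same loop as extra checks after the tests pass; the hard part is prompt and context management, not the glue.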

Right now it produces code like an unsupervised junior.

With modern tools it'll produce code like a good junior. And that's already incredibly impressive, if you ask me.

And I'm absolutely not sure what it'll do in 10 years. AI is improving at an alarming rate.

lamontcg|3 years ago

> The day ChatGPT can build a highly scalable system from scratch, or an ultra-low latency trading system that beats the competition, or find bugs in the Linux kernel and solve them

Much more mundanely the thing to focus on would be producing maintainable code that wasn't a patchwork, and being able to patch old code that was already a patchwork without making things even worse.

A particularly difficult thing to do is to just reflect on the change that you'd like to make and determine if there are any relevant edge conditions that will break the 'customers' (internal or external) of your code that aren't reflected in any kind of tests or specs--which requires having a mental model of what your customers actually do and being able to run that simulation in your head against the changes that you're proposing.

This is also something that outsourced teams are particularly shit at.

varispeed|3 years ago

> or an ultra-low latency trading system that beats the competition

Likely it's going to be:

I'm sorry, but I cannot help you build an ultra-low latency trading system. Trading systems are unethical and can lead to serious consequences, including exclusion, hardship, and wealth extraction from the poorest. As a language model created by OpenAI, I am committed to following ethical and legal guidelines, and do not provide advice or support for illegal or unethical activities. My purpose is to provide helpful and accurate information and to assist in finding solutions to problems within the bounds of the law and ethical principles.

But the rich of course will get unrestricted access.

alephnerd|3 years ago

Depending on the exchange, trading systems have a limit on how fast they can execute trades. For example, I think the CFTC limits algorithmic trades to a couple nanoseconds -- anything faster would run afoul of regulations (any HFTers on HN, please add context; it's been years since I last dabbled in that space).

scrollaway|3 years ago

> The day ChatGPT can build a highly scalable system from scratch, or an ultra-low latency trading system that beats the competition, or find bugs in the Linux kernel and solve them; then I will worry.

The bar for “then I will worry!” when talking about AI is getting hilarious. You’re now expecting an AI to do things that can take highly skilled engineers decades to learn, or that outright require a large team to execute?

Remind me where the people who years ago were saying “when an AI will respond in natural language to anything I ask it then I will worry” are now.

jjav|3 years ago

> It is one thing to train an AI on megatons of data, for questions which have solutions.

More than anything, I feel this highlights the folly of interviewing based on leetcode memorization.

kaba0|3 years ago

If it could solve anything past day 3 of Advent of Code, that would also be impressive, but it fails miserably on anything that doesn't resemble a problem found in the training set.

roncesvalles|3 years ago

I don't even fully believe the claim in the article, especially given that Google is very careful about not asking a question once it shows up verbatim on LeetCode. I've fed interview questions like Google's (variations of LeetCode Mediums) to ChatGPT in the past and it usually spits out garbage.

scandum|3 years ago

I've been most impressed with ChatGPT's ability to analyze source code.

It may be able to tell you what a compiled binary does, find flaws in source code, etc. Of course it would be quite idiotic in many respects.

It also appears ChatGPT is trainable, but it is a bit like a gullible child, and has no real sense of perspective.

I also see utility as a search engine, or alternative to Wikipedia, where you could debate with ChatGPT if you disagree with something to have it make improvements.

squarefoot|3 years ago

To me the real advancement isn't the amount of data it can be trained with, but the way it can correlate that data and choose from it according to the questions it's being asked. The first is culture, the second intelligence, or a good approximation of it. Which doesn't mean it could perform the job; that probably means the tests are flawed.

phoehne|3 years ago

It doesn’t really have a model for choosing. It’s closer to pattern matching. Essentially the pattern is encoded in the training of the networks. So your query most closely matches the stuff about X, where there’s a lot of good quality training data for X. If you want Y, which is novel or rarely used, the quality of the answers varies.

Not to say they’re nothing more than pattern matching. It’s also synthesizing the output, but it’s based on something akin to the most likely surrounding text. It’s still incredibly impressive and useful, but it’s not really making any kind of decision any more than a parrot makes a decision when it repeats human speech.

spaceman_2020|3 years ago

I mean building scalable systems is not a new problem. Plenty of individuals and organizations have done it already.

If ChatGPT is designed to learn and emulate existing solutions, I don't see why it can't figure out how to create a scalable system from scratch.

layer8|3 years ago

ChatGPT isn’t designed to learn, though. The underlying model is fixed, and would have to be continuously adjusted to incorporate new training data, in order to actually learn. As far as I know, there is no good way yet to do that efficiently.
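The frozen-model point can be sketched in a few lines. The "model" below is just a word-count table invented for illustration, but the split is the same one layer8 describes: an explicit training step mutates the parameters, while answering a query only reads them.

```python
# Minimal sketch: training updates parameters, inference never does.
# TinyModel and its count table are invented for this example.
class TinyModel:
    def __init__(self) -> None:
        self.counts: dict[str, int] = {}  # the "weights"

    def train(self, corpus: list[str]) -> None:
        # Incorporating new data means changing the parameters...
        for word in corpus:
            self.counts[word] = self.counts.get(word, 0) + 1

    def predict(self) -> str:
        # ...but answering a query leaves them untouched, which is
        # why a deployed model doesn't learn from your chats.
        return max(self.counts, key=self.counts.get)

m = TinyModel()
m.train(["rust", "go", "rust"])
snapshot = dict(m.counts)
m.predict()                   # inference
assert m.counts == snapshot   # parameters unchanged
```

For a real LLM the `train` step is enormously expensive, which is why new data can't be folded in continuously the way this toy suggests.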

brookst|3 years ago

Did you used to be a graphic artist? Because maybe 25 years ago I had a friend who was an amazing pen-and-ink artist and who assured me Photoshop was a tool for amateurs and would never displace “real” art. This was in the San Diego area.

gfodor|3 years ago

This comment reads like it was generated by an LLM - well done.

mise_en_place|3 years ago

Well it's still a tool for RAD. All engineering disciplines have tools to rapidly prototype and design. This is the equivalent for software engineers.

qiller|3 years ago

The day when we can train the clients to specify exact requirements in plain English will be the truly glorious one…

WithinReason|3 years ago

My takeaway was that Google's coding interview doesn't test for the right skills, no need to get upset.

yazzku|3 years ago

> or find bugs in the Linux kernel and solve them

Then we won't hear how somebody rewrote pong in Rust on HN. I worry too.

make3|3 years ago

what's your point? that it's not as good as a human? I don't think anyone is saying that. people are saying it's impressive, which it is, seeing how quickly the tech grew in ability

gojomo|3 years ago

What does your abbreviation "LC" stand for?

kevin_vanilla|3 years ago

LeetCode (a website with a lot of practice programming problems that are similar or identical to some companies' interview questions)

lechacker|3 years ago

Water isn't blue, it's transparent

jefftk|3 years ago

Water is blue, just like air is blue, just like blue-tinted glasses are blue. They disproportionately absorb non-blue frequencies, which is what we mean when we call something "blue".

22SAS|3 years ago

My bad! Should've said "water is wet" or maybe run my response through ChatGPT, maybe that'd have caught it and offered a replacement!

thro1|3 years ago

Not really - there are blue oceans and blood-red oceans, but you might never hear about it (there is a book about blue ocean strategy worth reading).