sasaf5 | 1 year ago
A year into the project I am forced to revise my opinion. When browsing my codebase I often stumble upon abstruse niche solutions to problems that should not have existed. It was clearly the work of someone inexperienced walking through walls in an AI-fuelled coding frenzy.
Having an oracle that knows all answers is useless if you don't know what to ask.
AlwaysRock|1 year ago
Isn't this what code reviews are for? I catch a decent amount of code that looks AI-generated. Typically it's some very foreign pattern or syntax that this engineer never used and that isn't common in the codebase, or something weirdly obtuse that could be refactored and shows a lack of understanding.
Normally I ask something like, "Interesting approach! Is there a reason to do it this way over (mention a similar pattern in our codebase)?" or if it's egregious, I might ask, "Can you explain this to me?".
This feels similar to early-career engineers copy-pasting Stack Overflow code. Now it's just faster and easier for them to do. It's still fairly easy to spot though.
steve1977|1 year ago
Wouldn't you need to have people with a proper understanding of the programming language and framework to do the code reviews?
vouaobrasil|1 year ago
There is no substitute for doing something correctly in the first place. The problem is that in the real world, deadlines and lack of time will always cause the default solution to be accepted a small percentage of the time, even when it is not ideal. The increasing creep of AI will only exacerbate that, and most technophiles will default to thinking of a new and improved AI tool to help with the problem, until it is AI tools all the way down.
No thanks.
ungreased0675|1 year ago
A foundational concept of quality control is not to rely on inspection to catch production defects. Why not? It diffuses responsibility, lets more problems reach the customer, and is less efficient than doing it correctly from the start.
arrowsmith|1 year ago
Or maybe it's far more common than you realise, and you're only spotting the obvious ones.
debarshri|1 year ago
When I asked him what prompted him to do that, he said Copilot suggested it so he just followed. I wonder if you could hijack Copilot's results and inject malicious code; since many end users don't understand a lot of the niche code it sometimes generates, you could manipulate them into adding the malicious code to the org's codebase.
vintermann|1 year ago
It might even take the context of the typos in your code comments, and conclude "yeah, this easy to miss subtle error feels right about here".
ben_w|1 year ago
Now I'm wondering, can you put in a comment which the LLM will pay attention to such that it generates subtle back-doors? And can this comment be such that humans looking at the code don't realise this behaviour will be due to the comment?
AshamedCaptain|1 year ago
Apparently, if you tried to access a class member without specifying a class instance, one of Eclipse's "auto-fix-it" suggestions was to make all members of that class static, and he just followed that suggestion blindly.
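A minimal sketch of that trap (class and field names are hypothetical, not from the original anecdote): the compiler rejects a static reference to a non-static field, and blindly accepting a "make it static" quick fix turns per-instance state into state shared by every instance.

```java
// Original intent: each Counter tracks its own count.
class Counter {
    // After accepting a quick fix like Eclipse's "Change 'count' to
    // 'static'", this field is shared by every Counter in the program.
    static int count = 0;

    void increment() { count++; }
}

public class Main {
    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        a.increment();
        // b was never incremented, yet it reports the same value,
        // because there is now only one 'count' for all instances.
        System.out.println("a=" + a.count + " b=" + b.count); // prints "a=1 b=1"
    }
}
```

The code still compiles and even "works" in a single-instance test, which is exactly why the mistake slips past someone who follows the suggestion blindly.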
stouset|1 year ago
This is a widespread problem regardless of AI. Hence the myriad Stack Overflow users who are frustrated after asking insane questions and getting pushback, who then dig their heels in after being told that the entire approach they’re using to solve a problem is bonkers and that they’re going to run into endless problems continuing down the path they’re on.
Not that people aren’t on too fine a hair trigger for that kind of response. But the sensitivity of that reaction is a learned defense mechanism against the sheer volume of it.
nottorp|1 year ago
The problem is, SO can't tell someone who asks an insane question from someone who asks the same question but has constraints that make it sane. *
So in time, sane people doing unusual stuff stop asking questions and you're left with homework.
* For example, "we can't afford to refactor the whole codebase because some architecture astronaut on SO says so" is a constraint.
Or another nice one is "this is not and will never be a project that will handle google-like volumes of data".
jongjong|1 year ago
That is a great point. The issue of not asking the right questions has been around for as long as I can remember, but I guess it wasn't seen as the bottleneck: people were so focused on solving problems by any means possible that they never had to think about solving them simply. We're still very far from that, and in some ways we have taken steps back. I hope AI will help shift human focus towards code architecture, because that's something that has been severely neglected. Most complex projects I've seen are severely over-engineered... They are complex, but they should not have grown to hundreds of thousands of lines of code; had people asked the right questions, focused on the right problems and chosen the right trade-offs, they would have been under 10K lines and far more efficient, interoperable and reliable.
I should note though, that my experience with coding with AI is that it often makes mistakes for complex algorithms, or it implements them in an inefficient way and I almost always have to change them. I get a lot of benefit from asking questions about APIs and to verify my assumptions or if I need a suggestion about possible approaches to do something.
crazygringo|1 year ago
People have been committing terrible code to projects for decades now, long before AI.
The solution is a code review process that works, and accountability if experienced employees are approving commits without properly reviewing them.
AI shouldn't have anything to do with it. Bad code shouldn't be passing review, period, no matter whether it was AI-assisted or not. And if your org doesn't do code review, then that's the actual problem.
eastbound|1 year ago
You’re putting the entire responsibility on senior employees. So we need many more of them. In fact, we don’t need juniors, because we can generate all possible code combinations. After all, it’s the responsibility of the seniors to select which one is correct.
It’s like how hiring was made crap by the “One-click apply” button on LinkedIn and all the other platforms. Sure, it’s easy for thousands of people to apply. Fact is, we offer quite a good job with a high salary, and we were looking for 5 people. We’ve spent a full year selecting them, because we’ve received hundreds of irrelevant applications, probably some of them AI-generated.
It’s no use flooding a filter with crap and hoping that the filter will do better work because it has more input.
Incipient|1 year ago
AI makes it much easier to push out bad code, fast...in a "frenzied" way one could say.
xanderlewis|1 year ago
The answer is also the same.
Volume. AI makes it trivially easy to generate vast amounts of code that doesn’t betray its lack of coherence easily. As with much AI content, it creates arbitrary amounts of work for humans, who have to sift through it to know whether it’s right. And it gives confidence to those who don’t know very much, who then start polluting the informationsphere with endless amounts of codswallop.
taneq|1 year ago
Honestly I find this to be the biggest advantage to using a coding LLM. It's like a more interactive debugging duck. By the time I've described my problem in sufficient detail for the LLM to generate a useful answer, I've solved it.
1-6|1 year ago
Having an oracle that knows how to put a framework of events together (even with errors) is much better than asking a human to do it from scratch.
shufflerofrocks|1 year ago
This sentence summarizes the issue with the current AI debacle, along with the whole "just copy/paste code from Stack Overflow and earn top bucks" meme that was going around in the 2010s.
You're not gonna be a valuable dev if you just write wrong code faster. Not only does ChatGPT/Copilot give haphazard code half the time, it uses seemingly random syntax and formatting. Even if LLMs get polished, you're gonna need standard software engineering knowledge to know what's right and wrong.
joshstrange|1 year ago
In my code reviews the person who wrote the code needs to explain to me what they changed and why. If they can’t then we are going to have a problem. If you don’t understand the code that an LLM spits out you don’t use it, it’s that simple. If you use it and can’t explain it, well… we are going to have to have some discussions and if it keeps happening you’re going to need to find other employment.
The exact same thing has been happening for pretty much the entire time we’ve had internet. Stack Overflow being the primary example now but there were plenty of other resources before SO. People have always been able to copy/paste code they don’t understand and shove it into a codebase. LLMs make that easier, no doubt, but the core issue has always been there and we, as an industry, have had decades to come up with defenses to this. Code review being the best tool in our toolbox IMHO.
chx|1 year ago
But that's not what these LLM systems are. https://hachyderm.io/@inthehands/112006855076082650
> You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.
> Alas, that does not remotely resemble how people are pitching this technology.
It is exactly what happened to you: it wrote bullshit. Plausible bullshit but bullshit nonetheless.
romeros|1 year ago
A significant problem is the subconscious defense mechanism, or bias, that compels us to conclude that AI has various shortcomings, asserting the ongoing need for the status quo.
The capabilities of GPT-3.x in early 2023 pale in comparison to today's AI, and it will continue to evolve and improve.
signatoremo|1 year ago
https://news.ycombinator.com/item?id=27771186
Yet people don’t like it in this thread. Does it touch a nerve?
raincole|1 year ago
You just need to ask it what to ask. /s