
2025 State of AI Code Quality

45 points | cliffly | 8 months ago | qodo.ai

51 comments


ilitirit|8 months ago

I currently have a big problem with AI-generated code and some of the junior devs on my team. Our execs keep pushing "vibe-coding" and agentic coding, but IMO these are just tools. And if you don't know how to use the tools effectively, you're still gonna generate bad code. One of the problems is that the devs don't realise why it's bad code.

As an example, I asked one of my devs to implement a batching process to reduce the number of database operations. He presented extremely robust, high-quality code and unit tests. The problem was that it was MASSIVE overkill.

AI generated a new service class, a background worker, several hundred lines of code in the main file. And entire unit test suites.

I rejected the PR and implemented the same functionality by adding two new methods and one extra field.
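To sketch the shape of that small diff (hypothetical names in Python, not the actual code from the PR): one new buffer field plus an add-and-flush pair of methods on an existing class, instead of a new service and background worker.

```python
class OrderRepository:
    """Existing class; the batching change adds one field and two methods."""

    BATCH_SIZE = 100

    def __init__(self, db):
        self.db = db
        self._pending = []  # the one new field: rows waiting to be written

    def save(self, row) -> None:
        """New method: queue a row, flushing automatically when the batch fills."""
        self._pending.append(row)
        if len(self._pending) >= self.BATCH_SIZE:
            self.flush()

    def flush(self) -> None:
        """New method: write all queued rows in a single database operation."""
        if self._pending:
            self.db.insert_many(self._pending)
            self._pending.clear()
```

Callers keep calling `save` as before; 250 individual writes collapse into 3 database operations.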

Now I often hear comments that AI can generate exactly what I want if I just use the correct prompts. OK, how do I explain that to a junior dev? How do they distinguish between "good" simple and "bad" simple (or complex)? Furthermore, in my own experience, LLMs tend to pick up on key phrases or technologies, then build their own context about what they think you need (e.g. "batching", "Kafka", "event-driven" etc). By the time you've refined your questions to the point where the LLM generates something that resembles what you want, you realise that you've basically pseudo-coded the solution in your prompt - if you're lucky. More often than not the LLM responses just start degrading massively to the point where they become useless and you need to start over. This is also something that junior devs don't seem to understand.

I'm still bullish on AI-assisted coding (and AI in general), but I'm not a fan at all of the vibe/agentic coding push by IT execs.

vitaflo|8 months ago

It’s difficult to do the hard work if you haven’t done the easy work 10,000 times. And we tend to get paid for the hard work.

LLMs remove the easy work from the junior devs’ task pile. That will make it a lot more difficult for them to do the actual hard work required of a dev. They skipped the stepping stones and critical thinking phase of their careers.

Senior devs are senior because they’ve done the easy things so often it’s second nature.

tails4e|8 months ago

Exactly this. If a junior dev is never exposed to the task of reasoning about code themselves, they will never know the difference between good and bad code. Code bases will be littered with code that does the job functionally but is not good code, and technical debt will accumulate. Surely this can't be good for the junior devs or the code bases long-term?

hiq|8 months ago

> OK, how do I explain that to a junior dev?

They could iterate with their LLM and ask it to be more concise, to give alternative solutions, and use their judgement to choose the one they end up sending to you for review. Assuming of course that the LLM can come up with a solution similar to yours.

Still, in this case, it sounds like you were able to tell within 20s that their solution was too verbose. Declining the PR and mentioning this extra field, and leaving it up to them to implement the two functions (or equivalent) that you implemented yourself would have been fine maybe? Meaning that it was not really such a big waste of time? And in the process, your dev might have learned to use this tool better.

These tools are still new and keep evolving such that we don't have best practices yet in how to use them, but I'm sure we'll get there.

cies|8 months ago

I think this is where functional style and strong types come in handy: they make it harder to write bad code that looks innocent.

In part this is because the development process leans less heavily on the discipline of devs, i.e. humans. Code becomes more formal.

I regularly have a piece of vibe-coded code in a strongly typed language, and it does not compile! (would that count as a hallucination?) I have thought many times: in Python/JS/Ruby this would just run, and only produce a runtime error in some weird case that likely only our customers on production will find...
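A toy illustration of that failure mode (hypothetical code, Python used for contrast): the dynamic language runs this happily, while a type checker such as mypy, or a compiler in a strongly typed language, rejects it before it ships.

```python
def parse_port(cfg: dict[str, str]) -> int:
    # Bug: the annotation promises an int, but when the key is present
    # we return the raw string. mypy flags this line; Python does not.
    return cfg.get("port", 8080)

# The default path looks fine in testing...
assert parse_port({}) == 8080

# ...but real config yields a str, and nothing fails here. The error only
# surfaces much later at the point of use, e.g. socket.bind(("", port)).
assert parse_port({"port": "9090"}) == "9090"
```

In a compiled strongly typed language the equivalent mismatch simply refuses to build, which is the "harder to write bad code that looks innocent" effect.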

diggan|8 months ago

> Our execs keep pushing "vibe-coding"

Imagine if wat (https://www.destroyallsoftware.com/talks/wat) appeared on the internet, and execs took it seriously and suddenly asked people to actually convert everything to JS.

This is how it sounds when I hear executives pushing for things like "vibe-coding".

> More often than not the LLM responses just start degrading massively to the point where they become useless and you need to start over

Yeah, this is true. The trick is to never go beyond one response from the LLM. If they get it wrong, start over immediately with a rewritten prompt so they get it right on the first try. I'm treating "the LLM got it wrong" as "I didn't make the initial user/system prompt good enough", not as in "now I'm gonna add extra context to try to steer it right".

imiric|8 months ago

This is the same catch-22 LLMs have had since their inception. They're only useful for domain experts who can review their output. Any non-experts using them will never become experts, because they're expecting the LLM to do the job for them. Unless they use the tool as an assistant that explains the underlying concepts, which can also be wrong, and they do the bulk of the work themselves, their skills will stagnate or deteriorate.

h3lp|8 months ago

I see an analogy to the discussions from my youth about compilers vs. assembly language programmers. It is still true that assembly is required to write high performance primitives, and that a competent assembly programmer will always beat a good compiler on a small function---but a compiler will consistently turn out decent and correct code for the entire project. So, basically, the compilers won, and assembly is relegated to be an important but niche skill.

bee_rider|8 months ago

It would be kinda cool if we could write pseudocode on a whiteboard or a notebook and have the computer spit out a real program.

h1fra|8 months ago

wait a couple of years, the junior will still not know how to code and companies will need someone with experience to fix all the mess $$$

jmsdnns|8 months ago

> 25% of developers estimate that 1 in 5 AI-generated suggestions contain factual errors or misleading code.

I cannot believe what's said in the report because it doesn't even reflect what my pro-AI coding friends say is true. Every dev I know says AI-generated suggestions are often full of noise, even the pro-AI folks.

bluefirebrand|8 months ago

I think this really highlights the difference between "pro ai" and "anti ai" people

"It's full of noise but I'm confident I can cut through it to get to the good stuff" - Pro AI

"It's full of noise and it takes more effort to cut through than it would take to just build it myself" - Anti AI

I'm pretty Anti myself. I think "I can cut through the noise" is pretty misplaced overconfidence for a lot of devs

ben_w|8 months ago

Why does:

> 25% of developers estimate that 1 in 5 AI-generated suggestions contain factual errors or misleading code.

Seem incompatible with "often full of noise", to you?

I can't speak for factual errors, but I'd say less than 20% of the code ChatGPT* gives me contains clear errors — more like 10%. Perhaps that just means I can't spot all the subtle bugs.

But even in the best case, there's a lot of "noise" in the answers they give me: Excess comments that don't add anything, a whole class file when I wanted just a function, that kind of thing.

* Other LLMs are different, and I've had one (I think it was Phi-2) start bad then switch both task *and language* mid-way through.

elpocko|8 months ago

I wish LLMs were generally viewed as Eliza on steroids, a thing to generate plausible sounding text with, in places where we used primitive generators based on Markov models before. To implement smarter NPCs in games, and virtual chat partners to talk to, just for fun. They are, after all, really fun to play with. They should be used as smart autocomplete in your IDE, not to generate whole projects from scratch. As an idea generator when you're stuck.
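For reference, the "primitive generators based on Markov models" mentioned above fit in a few lines (a toy first-order word chain, not any particular game's implementation):

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain: dict, start: str, length: int = 10) -> str:
    """Random-walk the chain to produce plausible-sounding text."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), "the"))
```

An LLM is this same "predict the next token" idea scaled up enormously, which is why "Eliza on steroids" is not an unfair framing.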

This requirement to be commercially useful and valuable, and to aid all kinds of businesses everywhere, gave a bad reputation to what is otherwise an amazing technological achievement. I am an outspoken AI enthusiast, because it is fun and interesting, but I hate how it is only seen as useful when it can do actual work like a human.

wbharding|8 months ago

It's hard to reconcile how 59% of devs in their survey are "confident" AI is improving their code quality, with prior empirical research that shows a surge in added & copy/pasted lines w/ a corresponding drop in moved (refactored) lines https://www.gitclear.com/ai_assistant_code_quality_2025_rese...

My experience (using a mix of Copilot & Cursor throughout every day) is that AI has become very capable of solving problems of low-to-intermediate complexity. But it requires extreme discipline to vet the code afterward for the FUD and unnecessary artifacts that sneak in alongside the "essential" code. These extra artifacts/FUD are to my mind the core of what will make AI-generated code more difficult to maintain than human-authored code in the long-term.

hiq|8 months ago

I'd be interested in seeing comparisons between languages. I expect that a terse language with an expressive type system (is that Haskell maybe?) can lead to way better results in terms of usefulness than, say, bash, because I can rely on the type system and the compiler to have gotten rid of some basic mistakes, and I can read the code faster (since it's more concise).

I've mostly used LLMs with python so far and I'm looking forward to using them more with compiled languages where at least I won't have mismatching types a compiler would have detected without my help.

hippari2|8 months ago

I think what really matters is how much code of that language is on StackOverflow :)

sathomasga|8 months ago

Survey from a company that's in the business of AI coding and thus has a monetary interest in promoting the technology. No details on who conducted the survey (the company itself?) or how the 609 respondents were selected. If limited to the company's own customers, massive selection bias. The results may or may not reflect reality, but this "report" is just marketing bullshit.

esafak|8 months ago

Lots of numbers. I'm interested in seeing the trends over time. I bet with their products they could track this daily.

diggan|8 months ago

> are 2.5x more likely to merge code without reviewing it

What the fuck? Are people taking "vibe coding" as a serious workflow? No wonder people's side projects feel more broken and buggy than before. Don't get me wrong, I "work with" LLMs, but I'd never merge/use any code that I didn't review, none of the models or tooling is mature enough for that.

Really strange how some people took a term that was supposed to be a "lol watch this" and started using it for work...

dartos|8 months ago

> Really strange how some people took a term that supposed to be a "lol watch this" and started using it for work...

don't forget about the insane amount of marketing around AI code companies and how they put "vibe coding" in front of everyone's face all the time.

You tell someone something enough times and they'll believe it

orangebread|8 months ago

Not for nothing, but I did create an entire game in browser using phaser as the engine.

But I'm also an experienced developer and at this point, an experienced "vibe coder". I use that last term loosely because I have a structured set of rules I have AI follow.

To really understand AI's capability you have to have experienced it in a meaningful way with managed expectations. It's not going to nail what you want right away. This is also why I spend a lot of time up front to design my features before implementing.

msgodel|8 months ago

I've done it for "nice to have" features in their own modules that I don't really care about and aren't consumed by anything else (recently an SVG plot generator for a program I wrote.) The LLM one-shotted it and I left it alone for a long time. Stuff like that is great application for literal vibe coding.

I can't imagine doing it for anything serious though.

mattgreenrocks|8 months ago

As awful as it is, it is entirely understandable: it follows naturally from the claims that LLMs can replace programmers entirely.

As capable as the models are, what matters more is how competent they are perceived to be, and how that is socialized. The hype machine is at deafening levels currently.

jasonthorsness|8 months ago

The absolute number who merged without reviewing was only 24% so maybe there is still hope!

namanyayg|8 months ago

> "65% of developers using AI for refactoring and ~60% for testing, writing, or reviewing say the assistant “misses relevant context."

> "Among those who feel AI degrades quality, 44% blame missing context; yet even among quality champions, 53% still want context improvements."

Is this even true anymore? Doesn't happen to me with claude 4 + claude code.