top | item 44940071

Benjammer | 6 months ago

This is the common refrain from the anti-AI crowd: they start by talking about an entire class of problems that already exist in humans-only software engineering, without any context or caveats. Then, when someone points out that these problems exist with humans too, they move the goalposts and make it about the "volume" of code and how AI is taking us across some threshold where everything will fall apart.

The telling thing is that they never mention this "threshold" in the first place; it only comes up as a response to being called on the bullshit.

bpt3 | 6 months ago

It's not bullshit. LLMs lower the bar for developers and increase velocity.

Increasing the quantity of something that is already an issue without automation involved will cause more issues.

That's not moving the goalposts, it's pointing out something that should be obvious to someone with domain experience.

Benjammer | 6 months ago

Why is the "threshold" argument never the first thing mentioned? Do you not understand what I'm saying here? Can you explain why the "code slop" argument is _always_ the first thing that people mention, without discussing this threshold?

Every post like this has a tone like they are describing a new phenomenon caused by AI, but it's just a normal professional code quality problem that has always existed.

Consider the difference between these two:

1. AI allows programmers to write sloppy code and commit things without fully checking/testing their code

2. AI greatly increases the speed at which code can be generated, but doesn't improve the speed of reviewing code nearly as much, so we're making software harder to verify

The second is a more accurate picture of what's happening, but it comes off as much less sensational in a social media post. When people post the first version, I discredit them immediately for trying to fear-monger and bait engagement rather than discussing the real problems with AI programming and how to prevent or solve them.