top | item 45406943


ZephyrBlu|5 months ago

> No it isn't. There's literally nothing about the process that forces you to skip understanding. Any such skips are purely due to the lack of will on the developer's side

This is the whole point. The marginal dev will go to the path of least resistance, which is to skip the understanding and churn out a bunch of code. That is why it's a problem.

You are effectively saying "just be a good dev, there's literally nothing about AI which is stopping you from being a good dev" which is completely correct and also missing the point.

The marginal developer is not going to put in the effort to wield AI in a skillful way. They're going to slop their way through. It is a concern for widespread AI coding, even if it's not a concern for you or your skill peers in particular.


v3xro|5 months ago

To add to the above - I see a parallel to the "if you are a good and diligent developer there is nothing to stop you from writing secure C code" argument. Which is to say: sure, if you put in the extra effort to avoid all the unsafe bits that lead to use-after-free bugs or race conditions, it's possible to write perfect C, or even perfect assembly. But in practice we have found that using memory-safe languages leads to a huge reduction in safety bugs in production. I think we will similarly find that not using AI leads to a huge reduction in bugs in production, once we have enough data to compare against human-generated systems. If that's a pre-existing bias, then so be it.

latentsea|5 months ago

> The marginal developer is not going to put in the effort to wield AI in a skillful way. They're going to slop their way through. It is a concern for widespread AI coding, even if it's not a concern for you or your skill peers in particular.

My mental model is that coding with LLMs amplifies both what you know and what you don't.

When you know something, you can direct it productively much faster to a desirable outcome than you could on your own.

When you don't know something, the time you would normally have spent researching to build a sufficient understanding gets replaced with evaluating the random stuff the LLM comes up with, which often works, but not in the way it ought to. And since you can get to some result quickly, the trade-off of doing the research feels somehow less worth it.

If you have no idea how to accomplish the task, you probably need to cultivate the habit of still doing the research first. Wielding AI skillfully is now the task of our industry, so we ought to be developing that skill and cultivating it in our team members.

keeda|5 months ago

I don't think that is a problem with AI; it is a problem with the idea that pure vibe-coding will replace knowledgeable engineers. While there is a loud contingent that hypes up this idea, it will not survive contact with reality.

Purely vibe-coded projects will soon break in inexplicable ways as they grow beyond trivial levels. Once that happens, their devs will either need to adapt and learn coding for real or be PIP'd. I can't imagine any such devs lasting long in the current layoff-happy environment. So it seems like a self-correcting problem, no?

(Maybe AGI, whatever that is, will change things, but I'm not holding my breath.)

The real problem we should be discussing is how we convince students and apprentices to abstain from AI until they learn the ropes for real.

abalashov|5 months ago

> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.

That's just it. You can only use AI usefully for coding* once you've spent years beating your head against code "the hard way". I'm not sure what that looks like for the next cohort, since they have had AI from day 1.

* That is, assuming it's nontrivial.

latentsea|5 months ago

> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real.

Learning the ropes looks different now. You used to learn by doing; now you need to learn by directing. In order to direct well, you first have to be knowledgeable. So, if you're starting work in an unfamiliar technology, a good starting point is to read whatever O'Reilly book gives a good overview, so that you understand the landscape of what's possible with the tool and can spot when the LLM is doing (now) obvious bullshit.

You can't just YOLO it for stuff you don't know and get good results, but if you build a foundation first through reading, you will do a lot better.

ZephyrBlu|5 months ago

On vibe coding being self-correcting, I would point to the growing number of companies mandating the use of AI, and to the quote "the market can stay irrational longer than you can stay solvent". Companies routinely burn millions of dollars on irrational endeavours for years, and AI has been promised as an insane productivity booster.

I wouldn't expect things to calm down for a while, even if real-life results are worse. You can make excuses for the underperformance of these tools for a very long time, especially if the CEO or other executives are invested.

> The real problem we should be discussing is, how do we convince students and apprentices to abstain from AI until they learn the ropes for real

I hate to say it but that's never going to happen :/