top | item 46675746

dchuk | 1 month ago

I’m very bought into the idea that raw coding is now a solved problem with the current models and agentic harnesses, to say nothing of what’s coming in the near term.

That being said, I think we’re in a weird phase right now where people’s obvious mental health issues are appearing as “hyper productivity” due to the use of these tools to absolutely spam out code that isn’t necessarily broadly coherent but is locally impressive. I’m watching multiple people, both publicly and privately, clearly breaking down mentally because of the “power” AI is bestowing on them. Their wires are completely crossed when it comes to the value of outputs vs. outcomes, and they’re espousing generated nonsense as if it were thoughtful insight.

It’s an interesting thing to watch play out.

ben_w|1 month ago

Mm.

I'd agree, the code "isn’t necessarily broadly coherent but is locally impressive".

However, I've seen some totally successful, even award-winning, human-written projects where I could say the same.

Ages back, I heard a woodworking analogy:

  LLM code is like MDF. Really useful for cheap furniture, massively cheaper than solid wood, but it would be a mistake to use it as a structural element in a house.

Now, I've never made anything more complex than furniture, so I don't know how well that fits the previous models, let alone the current ones… but I've absolutely seen success coming out of bigger balls of mud than the balls of mud I got from letting Claude loose for a bit without oversight.

Still, just because you can get success even with sloppy code doesn't mean I think this is true everywhere. It's not like the award was for industrial equipment or anything; the closest I've come to life-critical code is helping to find and schedule video calls with GPs.

theshrike79|1 month ago

"Without oversight" is the key here.

You need to define the problem space so that the agent knows what to do. Basically, give it the tools to determine when it's "done", as defined by you.

spmurrayzzz|1 month ago

This has also been an interesting social experiment in that we get to see what work people think is actually impressive vs trivial.

Folks who have spent years effectively snapping together other people’s APIs like LEGOs (and being well-compensated for it) are understandably blown away by the current state of AI. Compare that to someone writing embedded firmware for device microcontrollers, who would understandably be underwhelmed by the same.

The gap in reactions says more about the nature of the work than it does about the tools themselves.

aaronblohowiak|1 month ago

>Compare that to someone writing embedded firmware for device microcontrollers, who would understandably be underwhelmed by the same.

One datum for you: I recently asked Claude to make a jerk-limited and jerk-derivative-limited motion planner and to use the existing trapezoidal planner as reference for fuzzy-testing various moves (to ensure total pulses sent was correct) and it totally worked. Only a few rounds of guidance to get it to where I wanted to commit it.
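The commenter's actual planner isn't shown, but the core of the fuzz test described (checking that every generated move conserves the total pulse count against a trapezoidal reference) can be sketched. Everything below is hypothetical: the function names and the pulse-domain simplification are illustrative, not the commenter's code.

```python
import random


def trapezoid_pulse_counts(total_pulses: int, accel_pulses: int):
    """Split a move into (accel, cruise, decel) pulse counts.

    A real planner would derive accel_pulses from v_max and a_max;
    here it's a direct parameter to keep the sketch short. The
    invariant under test: the three phases sum to total_pulses.
    """
    a = min(accel_pulses, total_pulses // 2)  # short moves never reach cruise
    return a, total_pulses - 2 * a, a


def fuzz_check(iterations: int = 1000, seed: int = 0) -> bool:
    """Property-style check over random moves: no pulses gained or lost."""
    rng = random.Random(seed)
    for _ in range(iterations):
        total = rng.randrange(0, 100_000)
        accel = rng.randrange(1, 10_000)
        assert sum(trapezoid_pulse_counts(total, accel)) == total
    return True
```

A jerk-limited (S-curve) planner would replace the three linear phases with seven, but the same conservation check applies: compare its pulse total against this trapezoidal reference for the same commanded distance.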

aprdm|1 month ago

This is not true. You can see people who are much older and who built a lot of the "internet scale" equally excited about it, e.g. OG FreeBSD developers, Steve Yegge himself (who wrote Gas Town), etc.

In fact, I would say I've seen more excitement among "OG coders" (in their 50s and up) than among the mid-career generation.

phist_mcgee|1 month ago

I think this says a lot about yourself and where your prejudices and preferences lie.

yetihehe|1 month ago

If you give every idiot a voice heard worldwide, you will hear every idiot from the whole world. If you give every idiot a tool to make programs, you will see a lot of programs made by idiots.

meowface|1 month ago

Steve Yegge is not an idiot or a bad programmer. Possibly just hypomanic at most. And a good, entertaining writer. https://en.wikipedia.org/wiki/Steve_Yegge

Gas Town is ridiculous, and I had to uninstall Beads after seeing it only confuse my agents, but he's not completely insane or a moron. There may be some kernels of good ideas inside Gas Town that could be extracted into a better system.

sonnig|1 month ago

Well put. I can't help thinking of this every time I see the 854,594th "agent coordination framework" on GitHub. They all look strangely similar, are obviously themselves vibe-coded, and make no real effort to present any evidence that they help development in any way.

petesergeant|1 month ago

> where people’s obvious mental health issues

I think the kids would call this "getting one-shotted by AI"

GrowingSideways|1 month ago

> raw coding is now a solved problem

Surely this was solved with fortran. What changed? I think most people just don't know what program they want.

lordnacho|1 month ago

You no longer have to be very specific about syntax. There's now an AI that can translate your idea into whatever language you want.

Previously, if you had an idea of what the program needed to do, you needed to learn a new language. This is so hard that we use language itself as the metaphor: it's hard to learn a new language, and only a few people can translate from French to English, for example. Likewise, few people can translate English to Fortran.

Now, you can just think about your program in English, and so long as you actually know what you want, you can get a Fortran program.

The issue is now what it was originally for senior programmers: to decide what to make, not how to make it.

hahahahhaah|1 month ago

Yeah, I am definitely trying to stay off the hype and just use the damn tool.

bkolobara|1 month ago

There is a lot of research on how words and language influence what we think, and even what we can observe, like the Sapir-Whorf hypothesis. If in a language there is one word for 2 different colors, speakers of it are unable to see the difference between the colors.

I have a suspicion that extensive use of LLMs can result in damage to your brain. That's why we are seeing so many mental health issues surfacing, and we are getting a bunch of blog posts about "an agentic coding psychosis".

It could be that LLMs go from being bicycles for the brain to smoking for the brain, once we figure out the long-term effects.

BrenBarn|1 month ago

> If in a language there is one word for 2 different colors, speakers of it are unable to see the difference between the colors.

That is quite untrue. It is true that people may be slightly slower or less accurate in distinguishing colors that are within a labeled category than those that cross a category boundary, but that's far from saying they can't perceive the difference at all. The latter would imply that, for instance, English speakers cannot distinguish shades of blue or green.

jstanley|1 month ago

> If in a language there is one word for 2 different colors, speakers of it are unable to see the difference between the colors.

Perhaps you mean to say that speakers are unable to name the difference between the colours?

I can easily see differences between (for example) different shades of red. But I can't name them other than "shade of red".

I do happen to subscribe to the Sapir-Whorf hypothesis, in the sense that I think the language you think in constrains your thoughts - but I don't think it is strong enough to prevent you from being able to see different colours.

skywhopper|1 month ago

But the color thing is self-evidently untrue. It's not even hard to talk about. Unless you yourself are colorblind, I think that would be obvious?