dchuk | 1 month ago
That being said, I think we’re in a weird phase right now where people’s obvious mental health issues are appearing as “hyper-productivity”: they use these tools to absolutely spam out code that isn’t necessarily broadly coherent but is locally impressive. I’m watching multiple people, both publicly and privately, clearly breaking down mentally because of the “power” AI is bestowing on them. Their wires are completely crossed when it comes to the value of outputs vs. outcomes, and they’re passing off generated nonsense as if it were thoughtful insight.
It’s an interesting thing to watch play out.
ben_w | 1 month ago
I'd agree, the code "isn’t necessarily broadly coherent but is locally impressive".
However, I've seen some totally successful, even award-winning, human-written projects where I could say the same.
Ages back, I heard a woodworking analogy:
Now, I've never made anything more complex than furniture, so I don't know how well that analogy fit the previous models, let alone the current ones… but I've absolutely seen success come out of bigger balls of mud than the balls of mud I got from letting Claude loose for a bit without oversight.
Still, just because you can get success even with sloppy code doesn't mean I think this is true everywhere. It's not like the award was for industrial equipment or anything; the closest I've come to life-critical code is helping to find and schedule video calls with GPs.
theshrike79 | 1 month ago
You need to define the problem space so that the agent knows what to do. Basically give it the tools to determine when it's "done" as defined by you.
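One way to make "done" checkable, as a rough sketch: hand the agent a list of commands that must all exit zero. The specific commands and the `is_done` helper here are my own illustrative assumptions, not something from the comment above.

```python
import subprocess

def is_done(checks):
    """A hypothetical 'definition of done': the task is finished
    only when every check command exits with status 0."""
    return all(
        subprocess.run(cmd, shell=True).returncode == 0
        for cmd in checks
    )

# Illustrative placeholder checks; in a real Python project an agent
# might instead be told to run e.g. a test suite and a linter.
print(is_done(["exit 0", "exit 0"]))
```

The point is just that the agent's stopping condition is executable and defined by you, rather than by the model's own judgment.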
spmurrayzzz | 1 month ago
Folks who have spent years effectively snapping together other people’s APIs like LEGOs (and being well-compensated for it) are understandably blown away by the current state of AI. Compare that to someone writing embedded firmware for device microcontrollers, who would understandably be underwhelmed by the same.
The gap in reactions says more about the nature of the work than it does about the tools themselves.
aaronblohowiak | 1 month ago
One datum for you: I recently asked Claude to make a jerk-limited and jerk-derivative-limited motion planner and to use the existing trapezoidal planner as reference for fuzzy-testing various moves (to ensure total pulses sent was correct) and it totally worked. Only a few rounds of guidance to get it to where I wanted to commit it.
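For concreteness, a minimal sketch of the invariant that kind of fuzz test checks: whatever the profile shape, the per-tick step counts must sum to exactly the commanded number of pulses. The planner below is a plain trapezoidal profile, entirely my own illustration rather than the commenter's code.

```python
def trapezoidal_profile(total_pulses, v_max, accel):
    """Illustrative trapezoidal planner: accelerate by `accel`
    pulses/tick^2 up to `v_max` pulses/tick, cruise, then
    decelerate. Taking step = min(v, remaining) guarantees the
    profile sums to exactly `total_pulses`."""
    assert accel >= 1 and v_max >= 1
    profile, sent, v = [], 0, 0
    while sent < total_pulses:
        remaining = total_pulses - sent
        # Pulses consumed if braking started now: v-a, v-2a, ...
        braking = sum(range(v - accel, 0, -accel))
        if braking >= remaining:
            v = max(v - accel, 1)      # decelerate, never stall
        else:
            v = min(v + accel, v_max)  # accelerate / cruise
        profile.append(min(v, remaining))
        sent += profile[-1]
    return profile
```

A fuzz test like the one described would compare the new jerk-limited planner's pulse totals against a reference such as this; here the sketch just exposes the invariant directly.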
aprdm | 1 month ago
In fact, I would say I've seen more excitement from the "OG Coders" (folks in their 50s and up) than from the mid-career generation.
meowface | 1 month ago
Gas Town is ridiculous and I had to uninstall Beads after seeing it only confuse my agents, but he's not completely insane or a moron. There may be some kernels of good ideas inside of Gas Town which could be extracted out into a better system.
petesergeant | 1 month ago
I think the kids would call this "getting one-shotted by AI"
GrowingSideways | 1 month ago
Surely this was solved with Fortran. What changed? I think most people just don't know what program they want.
lordnacho | 1 month ago
Previously, if you had an idea of what the program needed to do, you needed to learn a new language. This is so hard that we use language itself as a metaphor: it's hard to learn a new language, and only a few people can translate French to English, for example. Likewise, few people can translate English to Fortran.
Now, you can just think about your program in English, and so long as you actually know what you want, you can get a Fortran program.
The issue is now what it always was for senior programmers: deciding what to make, not how to make it.
bkolobara | 1 month ago
I have a suspicion that extensive use of LLMs can damage your brain. That's why we are seeing so many mental health issues surfacing, and why we are getting a bunch of blog posts about "an agentic coding psychosis".
It could be that LLMs go from being bicycles for the brain to smoking for the brain, once we figure out the long-term effects.
BrenBarn | 1 month ago
That is quite untrue. It is true that people may be slightly slower or less accurate in distinguishing colors that are within a labeled category than those that cross a category boundary, but that's far from saying they can't perceive the difference at all. The latter would imply that, for instance, English speakers cannot distinguish shades of blue or green.
jstanley | 1 month ago
Perhaps you mean to say that speakers are unable to name the difference between the colours?
I can easily see differences between (for example) different shades of red. But I can't name them other than "shade of red".
I do happen to subscribe to the Sapir-Whorf hypothesis, in the sense that I think the language you think in constrains your thoughts - but I don't think it is strong enough to prevent you from being able to see different colours.
skywhopper|1 month ago