Every time I say I don't see the productivity boost from AI, people always say I'm using the wrong tool, or the wrong model. I use Claude with Sonnet, Zed with either Claude Sonnet 4 or Opus 4.6, Gemini, and ChatGPT 5.2. I use these tools daily and I just don't see it.
The vampire in the room, for me, seems to be feeling like I'm the only person in the room that doesn't believe the hype. Or should I say, being in rooms where nobody seems to care about quality over quantity anymore. Articles like this are part of the problem, not the solution.
Sure, they are great for generating some level of code, but the deeper it goes the more it hallucinates. My first or second git commit from these tools is usually closer to a working full solution than the fifth one. The time spent refactoring prompts, testing the code, repeating instructions, refactoring naive architectural decisions, and double-checking hallucinations when it comes to research takes more time than AI saves me. This isn't free.
A CTO told me this week that he can't code or brainstorm anymore without AI. We've had these tools for 4 years, and like this guy says: either AI or the competition eats you. So, where is the output? Aside from more AI tools, what has been released in the past 4 years that makes it obvious, looking back, that this is when AI became available?
Many engineers get paid a lot of money to write low-complexity code gluing things together and tweaking features according to customer requirements.
When the difficulty of a task is neatly encompassed in a 200-word ticket and the implementation lacks much engineering challenge, AI can pretty reliably write the code: mediocre code for mediocre challenges.
A huge fraction of the software economy runs on CRUD and some business logic. There just isn't much complexity inherent in any of the feature sets.
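The "ticket-sized CRUD plus business logic" point can be made concrete. Here is a hypothetical sketch (all names invented) of the kind of change a 200-word ticket typically describes: store a customer note, list notes per customer, with a single validation rule as the entire "business logic":

```python
# Hypothetical ticket: "Add the ability to save a note on a customer
# and list all notes for a customer. Notes must be non-empty."
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NoteStore:
    notes: Dict[str, List[str]] = field(default_factory=dict)

    def add_note(self, customer_id: str, text: str) -> None:
        if not text.strip():  # the entire "business logic"
            raise ValueError("note text must be non-empty")
        self.notes.setdefault(customer_id, []).append(text)

    def list_notes(self, customer_id: str) -> List[str]:
        return list(self.notes.get(customer_id, []))
```

There is nothing here an LLM can get architecturally wrong, which is arguably why this class of task is where the tools shine.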
I am with you on this, and you can't win, because as soon as you voice this opinion you get overwhelmed with "you don't have the sauce/prompt" responses, which rest on an inherent fallacy: they assume you are solving the same problems they are.
I work in GPU programming, so there is no way in hell that JavaScript tools and database-wrapper tasks can be on equal terms with generating, for example, Blackwell tcgen05 warp-scheduled kernels.
I also don’t believe the hype. The boosters always say I would believe if I were to just experience it. But that’s like saying all I have to do is eat a hamburger to experience how nutritious it is for me.
I love hamburgers, and nothing in my experience tells me I shouldn’t eat them every day. But people have studied them over time and I trust that mere personal satisfaction is insufficient basis for calling hamburgers healthy eating.
Applied to AI: How do you know you have “10x’d?” What is your test process? Just reviewing the test process will reverse your productivity! Therefore, to make this claim you probably are going on trust.
If you have 10x the trust, you will believe anything.
I don't understand what including the time span of "4 years" does for your argument here. I don't think anyone is arguing that the usefulness of these AIs for real projects started at GPT 3.5/4. Do you think the capabilities of current AIs are approximately the same as GPT 3.5/4 from 4 years ago (actually, I think the SOTA 4 years ago today might have been LaMDA, as GPT 3.5 wasn't out yet)?
> We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
― Roy Amara
> I use these tools daily and I just don't see it.
So why use them if you see no benefit?
You can refuse to use it, it's fine. You can also write your code in notepad.exe, without a linter, and without an Internet connection if you want. Your rodeo.
I don't understand the defensiveness.
Here's what I find Claude Code (Opus) useful for:
1. Copy-pasting existing working code with small variations. If the intended variation is bigger, it fails to bring productivity gains, because it's almost universally wrong.
2. Exploring unknown code bases. Previously I had to curse my way through code reading sessions, now I can find information easily.
3. Google Search++, e.g. for deciding on tech choices. Needs a lot of hand holding though.
... that's it? Any time I tried doing anything more complex I ended up scrapping the "code" it wrote. It always looked nice though.
I'm an AI hipster, because I was confusing engagement for productivity before it was cool. :P
TFA mentions the slot machine aspect, but I think there are additional facets: The AI Junior Dev creates a kind of parasocial relationship and a sense of punctuated progress. I may still not have finished with X, but I can remember more "stuff" happening in the day, so it must've been more productive, right?
Contrast this to the archetypal "an idea for fixing the algorithm came to me in the shower."
What things (languages etc.) do you work with/on primarily?
I don't know what to say, except that I see a substantial boost. I generally code slowly, but since GPT-5.1 was released, what would've taken me months to do now takes me days.
Admittedly, I work in research, so I'm primarily building prototypes, not products.
There is a well-studied cognitive bias: https://en.wikipedia.org/wiki/Illusory_superiority. People tend to think they're special.
> The vampire in the room, for me, seems to be feeling like I'm the only person in the room that doesn't believe the hype. Or should I say, being in rooms where nobody seems to care about quality over quantity anymore.
If in real life you are noticing the majority of peers that you have rapport with tending towards something that you don't understand, it usually isn't a "them" problem.
It's something for you to decide. Are you special? Or are you fundamentally missing something?
I think Yegge hit the nail on the head: he has an addiction. Opus 4.5 is awesome but the type of stuff Yegge has been saying lately has been... questionable, to say the least. The kids call it getting "one-shotted by AI". Using an AI coding assistant should not be causing a person this much distress.
A lot of smart people think they're "too smart" to get addicted. Plenty of tales of booksmart people who tried heroin and ended up stealing their mother's jewelry for a fix a few months later.
I'm a recovering alcoholic. One thing I learned from therapists etc. along the way is that there are certain personality types with high intelligence, and also higher sensitivity to other things, like noise, emotional challenges, and addictive/compulsive behaviour.
It does not surprise me at all that software engineers are falling into an addiction trap with AI.
All this praise for AI... I honestly don't get it. I have used Opus 4.5 for work and private projects. My experience is that all of the AIs struggle when the project grows. They always find some kind of local minimum they cannot get out of, but tell you this time their solution will work... but it doesn't. They waste an enormous amount of my time with this behaviour. In the end I always have to do it myself.
Maybe when AIs are able to say: "I don't know how this works" or "This doesn't work like that at all." they will be more helpful.
What I use AIs for is searching for stuff in large codebases. Sometimes I don't know the name or the file name and describe to them what I am looking for. Or I let them generate a Python/Bash script for some one-off task. Or use them to find specific things in a file that a regex cannot find. Simple small tasks.
It might well be that I am doing it totally wrong, but I have yet to see a medium- to large-sized project with maintainable code that was generated by AI.
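The "one-off script" category mentioned above has a characteristic shape. A hypothetical example (the function name and CLI behavior are invented for illustration): count files by extension under a directory tree, the kind of throwaway task these tools handle reliably.

```python
# Hypothetical one-off script: summarize a directory tree by file extension.
import os
from collections import Counter

def count_extensions(root: str) -> Counter:
    counts: Counter = Counter()
    for _dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            _, ext = os.path.splitext(name)
            counts[ext or "<none>"] += 1  # bucket extensionless files separately
    return counts

if __name__ == "__main__":
    import sys
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for ext, n in count_extensions(root).most_common():
        print(f"{ext}\t{n}")
```

Nothing here requires global knowledge of a codebase, which fits the observation that the tools do best on small, self-contained tasks.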
At what point does the project outgrow the AI in your experience? I have a 70k-LOC backend/frontend/database/Docker app that Claude still one-shots most features/tasks I throw at it. Perhaps it's not as good at remembering all the intertwined side effects between functionalities/UIs and I have to let it know "in the calendar view, we must hide it as well", but that takes little time/effort.
Does it break down at some point to the extent that it simply does not finish tasks? Honest question as I saw this sentiment stated previously and assumed that sooner or later I'll face it myself but so far I didn't.
I think most of us - if not _all_ of us - don't know how to use these things well yet. And that's OK. It's an entirely new paradigm. We've honed our skills and intuition based on humans building software. Humans make mistakes, sure, but humans have a degree and style of learning and failure patterns we are very familiar with. Humans understand the systems they build to a high degree; this knowledge helps them predict outcomes, and even helps them achieve the goals of their organisation _outside_ writing software.
I kinda keep saying this, but in my experience:
1. You trade the time you'd take to understand the system for time spent testing it.
2. You trade the time you'd take to think about simplifying the system (so you have less code to type) for execution (so you build more in less time).
I really don't know if these are _good_ tradeoffs yet, but it's what I observe. I think it'll take a few years until we truly understand the net effects. The feedback cycles for decisions in software development and business can be really long, several years.
I think the net effects will be positive, not negative. I also think they won't be 10x. But that's just me believing stuff, and it is relatively pointless to argue about beliefs.
> Maybe when AIs are able to say: "I don't know how this works" or "This doesn't work like that at all." they will be more helpful.
Funny you say that; I encountered this in a seemingly simple task. Opus inserted something along the lines of "// TODO: someone with flatbuffers reflection expertise should write this". I actually thought this was better than I anticipated, even though the task was specifically related to fbs reflection. And it was because I didn't waste more time and could immediately start rewriting it from scratch.
He's totally correct on the extraction that companies do (always has been). What I kinda disagree with is the notion that if a company doesn't go the same path as these others, where everyone is "10x'ing" with AI, it will suddenly disappear. I really don't think it will work that way.
Yeah, some might, if another company/startup goes after their business and builds faster, but building faster doesn't mean you're building what people want/need. You might be building bloat (Windows/MS) that no one cares about.
Companies still need to know what to build, not just build something/anything faster.
This has been my experience even before AI. We are a small bootstrapped company, and we have major competitors with free offerings and much more resources than we have (due to VC funding or other backing). While they've achieved some success, they've come nowhere near close to out-competing us.
Paying for AI is much more accessible than getting venture funding, so it's less of a differentiator. They could pay for more AI than we could, but that's already been true with humans, and it hasn't necessarily helped.
Knowing what to build is still the game. As well as the actual business side of business - building a trusted brand, relationships with customers, smart marketing etc.
1. Moats and products have already been built, so it's really about startups that are racing to get products/features to market.
2. I've slowly learnt in my own career that you need to really be careful with picking what you build. It doesn't matter if it's waterfall or a quick agile "experiment", it all takes time and focus. So the more you can design/refine/roadshow/validate your ideas before any code is touched, the better off you'll be.
Should a strike happen if devs are told to use Claude, or should a strike happen if devs aren't given access to Claude?
So yes, please adopt our work ethic and legal framework. It's going to help us tremendously.
We're certainly in the middle of a whirlwind of progress. Unfortunately, as AI capabilities increase, so do our expectations.
Suddenly, it's no longer enough to slap something together and call it a project. The better version with more features is just one prompt away. And if you're just a relay for prompts, why not add an agent or two?
I think there won't be a future where the world adapts to a 4-hour day. If your boss or customer also sees you as a relay for prompts, they'll slowly cut you out of the loop, or reduce the amount they pay you. If you instead want to maintain some moat, or build your own money-maker, your working hours will creep up again.
In this environment, I don't see this working out financially for most people. We need to decide which future we want:
1. the one where people can survive (and thrive) without stable employment;
2. the one where we stop automating in favor of stable employment; or
3. the one where only those who keep up stay afloat.
> With a 10x boost, if you give an engineer Claude Code, then once they’re fluent, their work stream will produce nine additional engineers’ worth of value.
I keep hearing about this 10x productivity, but where is it materializing? Most developers at my company use Claude Code, but we don't seem to be shipping new features at ten times the rate. In fact, tickets still take roughly the same amount of time to complete.
Some interesting parts in the text. Some not so interesting ones. The author seems to think he's a big deal, though; a month ago, I did not know who he was. My work environment has never heard of him (I'm an SDE at FAANG). Maybe I'm an outlier and he indeed influences the whole expectation management at companies with his writing, or maybe the success (?) of gastown got to him and he thinks he's bigger than he actually is. Time will tell. In any case, the glorification of oneself in an article like that throws me off for some reason.
He's early-Amazon, early-Google, so he's seen two companies super-scale. Few people last through two paradigm shifts, so that's no guarantee of credentials. But at the time he was famous for a specific accidentally-public post that exposed people to how much Bezos's influence ramified through Amazon and how his choices contrasted with Google's approach to platforms: https://news.ycombinator.com/item?id=3101876
Popular blogger from roughly a decade ago. His rants were frequently cited early in my career. I think he’s fallen off in popularity substantially since.
Am I getting Steve's point? It's a bit like what happened with the agricultural revolution.
A long time ago, food took effort to find, and calories were expensive.
Then we had a breakthrough in cost/per/calories.
We got fat, because we cannot moderate our food intake. It is killing us.
A long time ago, coding took effort, and programmer productivity was expensive.
Then we had a breakthrough in cost/per/feature.
Now we are exhausted, because we cannot moderate our energy and attention expenditure. It is killing us.
He talks about this new tech for extracting more value from engineers as if it were fracking: when engineers become impermeable, you inject a high-pressure cocktail of AI to get their internal hydrocarbons flowing. It works, but now he feels all pumped out. The vampire metaphor is hopefully better, in that blood replenishes if you don't take too much. A succubus may be an improved comparison, in that a creative seed is extracted and depleted, then refills over a refractory period.
But at scale. Yegge gets close to it in this blog (which actually made me lol, good to see that he is back on form), but shies away from it.
If AI is producing a real productivity boom then we should be seeing a flood of high-quality non-AI related software. If building and shipping software is now easier and faster then all of the software that we have that doesn't quite work right should be displaced by high quality successors. It should be happening right now.
So where is it? Why is all this velocity going into tooling around AI instead? Face it, an entire industry has fallen into the trap of building the automation instead of the product they were trying to automate the production of.
Where is the new high-quality C compiler that actually compiles the Linux kernel to a measurably higher quality than gcc? If AI is really increasing productivity, shouldn't we have that instead of a press-oriented hype flop?
Luckily we work for ourselves in our studio, and I have no one to answer to except my business partner and customers, and tech is my domain. But I have concluded "we already build fast enough." Really how much faster do we need to build? Deployments: automated. Tests: automated. Migrations: automated. Frameworks: complete. Stack: stable. Scaling: solved. OKAY so now with AI we can build "MORE!" More of WHAT exactly? What makes our lives better? What makes our customers happier? How about I just directly feed customer support tickets into Claude and let it rip.
I'm increasingly thinking either people were terrible developers, used shit tools to begin with, or are in a mass psychosis. I certainly feel bad for anyone reporting to "the business guy." He never respected you to begin with, and now he literally thinks "why are you so slow? I can build Airbnb in a weekend."
For someone who previously could achieve nothing, these tools are magical, as they can now achieve something. It feels to them like infinity because their base was 0. That alone will create a lot of things they wouldn't have been able to, good for them. However for people who already know what they're doing, I only feel slightly pushed along some asymptote. My bottlenecks simply are not measured in tokens to screen.
I’m in Steve’s demographic, showing similar symptoms, and I’m as worried as he is about how we’re going to cope.
It’s a matter of opportunity cost. It used to be that when I rested for an hour, I lost an hour of output. Now, when I rest for an hour, I lose what used to be a day of output.
I need to rewire my brain and learn how to split the difference. There’s no point in producing a lot of output if I don’t have time to live.
The idea that you’ll get to enjoy the spoils when you grow up is false. You won’t. Just produce 5x and take some time off every day. You may even be more likely to reflect, and end up producing the right thing.
After at least a century of labour saving devices being produced and widely adopted in all areas of our lives, how much less time do we spend labouring now?
It's real, and I've been telling all the people around me who get vested in this sort of exponential growth to be very wary of the impending burnout, which spares no soul hungry to get high on information. Getting high on information is now a thing; it is not cyberpunk fiction anymore, and burnout is a real threat, VR or not. Perhaps one can burn out on TikTok these days.
> Let’s start with the root cause, which is that AI does actually make you 10x more productive, once you learn how.
> But hey, don’t take it from me. Take it from… the Copilot people. According to The Verge and a bunch of other reputable news sources, Microsoft is openly encouraging their employees to use multiple tools, and as a result, Claude Code has rapidly become dominant across engineering at Microsoft.
And what wonders they've achieved with it! Truly innovative enhancements to notepad being witnessed right now! The inability to shut down your computer! I can finally glimpse the 10x productivity I've been missing out on!
The source of the addiction is that an amount of effort is highly likely to result in a fulfilling outcome. That makes you want to make more effort. In the past, a lot of work was very futile, very tedious and often felt hopeless and that made people essentially give up. So this is a very good problem to have. I guess people should monitor their own output and try to pace themselves. But also be grateful that we have these capabilities that allow us to solve so many problems and achieve so many of the things that we want in life.
> all your complaining about AI not being useful for real-world tasks is obsolete... let’s not quibble about the exact productivity boost from AI
No, that's exactly the purpose of this ending up on HN.
> if you give an engineer Claude Code, then once they’re fluent, their work stream will produce nine additional engineers’ worth of value. For someone.
Nope.
> you decide you’re going to impress your employer, and work for 8 hours a day at 10x productivity. You knock it out of the park and make everyone else look terrible by comparison.
Pure junior dev fantasy. Nobody cares about how many hours you "really" work, or what you did as long as you meet their original requirements. They're going to ignore the rest no matter how much you try to talk a big game. This has been true since the beginning of employment.
> In that scenario, your employer captures 100% of the value from you adopting AI. You get nothing, or at any rate, it ain’t gonna be 9x your salary. And everyone hates you now.
Again, nobody cares.
> Congrats, you were just drained by a company. I’ve been drained to the point of burnout several times in my career, even at Google once or twice.
Pointless humblebrag that even we the readers don't care about.
> Now let’s look at Scenario B. You decide instead that you will only work for an hour a day, and aim to keep up with your peers using AI. On that heavily reduced workload, you manage to scrape by, and nobody notices.
This isn't a thing unless you were borderline worthless and junior to begin with.
> In this scenario, your company goes out of business. I’m sorry, but your victory over The Man will be pyrrhic, because The Man is about to be kicked in The Balls, since with everyone slacking off, a competitor will take them out pretty fast.
Hard disagree. Author has clearly never worked outside of a startup or silicon valley where the money is more mature.
Flagged for what is at best extreme ignorance, or more likely ragebait and bad faith hype for a ship that sailed several years ago. I don't know what else to do with these blog posts anymore.
[+] [-] dwedge|28 days ago|reply
The vampire in the room, for me, seems to be feeling like I'm the only person in the room that doesn't believe the hype. Or should I say, being in rooms where nobody seems to care about quality over quantity anymore. Articles like this are part of the problem, not the solution.
Sure they are great for generating some level of code, but the deeper it goes the more it hallucinates. My first or second git commit from these tools is usually closer to a working full solution than the fifth one. The time spent refactoring prompts, testing the code, repeating instructions, refactoring naive architectural decisions and double checking hallucinations when it comes to research take more than the time AI saves me. This isn't free.
A CTO this week told me he can't code or brainstorm anymore without AI. We've had these tools for 4 years, like this guy says - either AI or the competition eats you. So, where is the output? Aside from more AI-tools, what has been released in the past 4 years that makes it obvious looking back that this is when AI became available?
[+] [-] Scaevolus|28 days ago|reply
When the difficulty of a task is neatly encompassed in a 200 word ticket and the implementation lacks much engineering challenge, AI can pretty reliably write the code-- mediocre code for mediocre challenges.
A huge fraction of the software economy runs on CRUD and some business logic. There just isn't much complexity inherent in any of the feature sets.
[+] [-] augment_me|28 days ago|reply
I work in GPU programming, so there is no way in hell that JavaScript tools and database wrapper tasks can be on equal terms with generating for example Blackwell tcgen05 warp-scheduled kernels.
[+] [-] satisfice|28 days ago|reply
I love hamburgers, and nothing in my experience tells me I shouldn’t eat them every day. But people have studied them over time and I trust that mere personal satisfaction is insufficient basis for calling hamburgers healthy eating.
Applied to AI: How do you know you have “10x’d?” What is your test process? Just reviewing the test process will reverse your productivity! Therefore, to make this claim you probably are going on trust.
I you have 10x the trust, you will believe anything.
[+] [-] Davidzheng|28 days ago|reply
[+] [-] amelius|28 days ago|reply
― Roy Amara
[+] [-] thunky|28 days ago|reply
> I use these tools daily and I just don't see it.
So why use them if you see no benefit?
You can refuse to use it, it's fine. You can also write your code in notepad.exe, without a linter, and without an Internet connection if you want. Your rodeo
I don't understand the defensiveness.
[+] [-] exfalso|28 days ago|reply
Here's what I find Claude Code (Opus) useful for:
1. Copy-pasting existing working code with small variations. If the intended variation is bigger then it fails to bring productivity gains, because it's almost universally wrong.
2. Exploring unknown code bases. Previously I had to curse my way through code reading sessions, now I can find information easily.
3. Google Search++, e.g. for deciding on tech choices. Needs a lot of hand holding though.
... that's it? Any time I tried doing anything more complex I ended up scrapping the "code" it wrote. It always looked nice though.
[+] [-] Terr_|28 days ago|reply
TFA mentions the slot machine aspect, but I think there are additional facets: The AI Junior Dev creates a kind of parasocial relationship and a sense of punctuated progress. I may still not have finished with X, but I can remember more "stuff" happening in the day, so it must've been more productive, right?
Contrast this to the archetypal "an idea for fixing the algorithm came to me in the shower."
[+] [-] TomasBM|27 days ago|reply
I don't know what to say, except that I see a substantial boost. I generally code slowly, but since GPT-5.1 was released, what would've taken me months to do now takes me days.
Admittedly, I work in research, so I'm primarily building prototypes, not products.
[+] [-] potsandpans|27 days ago|reply
There is a well studied cognitive bias: https://en.wikipedia.org/wiki/Illusory_superiority. People tend to think they're special.
> The vampire in the room, for me, seems to be feeling like I'm the only person in the room that doesn't believe the hype. Or should I say, being in rooms where nobody seems to care about quality over quantity anymore.
If in real life you are noticing the majority of peers that you have rapport with tending towards something that you don't understand, it usually isn't a "them" problem.
It's something for you to decide. Are you special? Or are you fundamentally missing something?
[+] [-] mjr00|28 days ago|reply
A lot of smart people think they're "too smart" to get addicted. Plenty of tales of booksmart people who tried heroin and ended up stealing their mother's jewelry for a fix a few months later.
[+] [-] davedx|28 days ago|reply
It does not surprise me at all that software engineers are falling into an addiction trap with AI.
[+] [-] fileeditview|28 days ago|reply
Maybe when AIs are able to say: "I don't know how this works" or "This doesn't work like that at all." they will be more helpful.
What I use AIs for is searching for stuff in large codebases. Sometimes I don't know the name or the file name and describe to them what I am looking for. Or I let them generate some random task python/bash script. Or use them to find specific things in a file that a regex cannot find. Simple small tasks.
It might well be I am doing it totally wrong.. but I have yet to see a medium to large sized project with maintainable code that was generated by AI.
[+] [-] Bishonen88|28 days ago|reply
Does it break down at some point to the extent that it simply does not finish tasks? Honest question as I saw this sentiment stated previously and assumed that sooner or later I'll face it myself but so far I didn't.
[+] [-] fhd2|28 days ago|reply
I kinda keep saying this, but in my experience:
1. You trade the time you'd take to understand the system for time spent testing it.
2. You trade the time you'd take to think about simplifying the system (so you have less code to type) into execution (so you build more in less time).
I really don't know if these are _good_ tradeoffs yet, but it's what I observe. I think it'll take a few years until we truly understand the net effects. The feedback cycles for decisions in software development and business can be really long, several years.
I think the net effects will be positive, not negative. I also think they won't be 10x. But that's just me believing stuff, and it is relatively pointless to argue about beliefs.
[+] [-] exfalso|28 days ago|reply
Funny you say that, I encountered this in a seemingly simple task. Opus inserted something along the lines of "// TODO: someone with flatbuffers reflection expertise should write this". I actually thought this was better than I anticipated even though the task was specifically related to fbs reflection. And it was because I didn't waste more time and could immediately start rewriting it from scratch.
[+] [-] blahblaher|28 days ago|reply
Companies still need to know what to build, not just build something/anything faster.
[+] [-] crabmusket|27 days ago|reply
Paying for AI is much more accessible than getting venture funding, so it's less of a differentiator. They could pay for more AI than we could, but that's already been true with humans, and it hasn't necessarily helped.
Knowing what to build is still the game. As well as the actual business side of business - building a trusted brand, relationships with customers, smart marketing etc.
[+] [-] zdc1|28 days ago|reply
1. Moats and products have already been built, so it's really about startups that are racing to get products/features to market.
2. I've slowly learnt in my own career that you need to really be careful with picking what you build. It doesn't matter if it's waterfall or a quick agile "experiment", it all takes time and focus. So the more you can design/refine/roadshow/validate your ideas before any code is touched, the better off you'll be.
[+] [-] mbgerring|28 days ago|reply
[+] [-] mrkeen|28 days ago|reply
Should a strike happen if devs are told to use Claude, or should a strike happen if devs aren't given access to Claude?
[+] [-] bsaul|28 days ago|reply
So yes, please adopt our work ethic and legal framework. It's going to help us tremendously.
[+] [-] TomasBM|28 days ago|reply
Suddenly, it's no longer enough to slap something together and call it a project. The better version with more features is just one prompt away. And if you're just a relay for prompts, why not add an agent or two?
I think there won't be a future where the world adapts to a 4-hour day. If your boss or customer also sees you as a relay for prompts, they'll slowly cut you out of the loop, or reduce the amount they pay you. If you instead want to maintain some moat, or build your own money-maker, your working hours will creep up again.
In this environment, I don't see this working out financially for most people. We need to decide which future we want:
1. the one where people can survive (and thrive) without stable employment;
2. the one where we stop automating in favor of stable employment; or
3. the one where only those who keep up stay afloat.
[+] [-] singularity2001|28 days ago|reply
[+] [-] hebrides|28 days ago|reply
I keep hearing about this 10x productivity, but where is it materializing? Most developers at my company use Claude Code, but we don't seem to be shipping new features at ten times the rate. In fact, tickets still take roughly the same amount of time to complete.
[+] [-] Bishonen88|28 days ago|reply
[+] [-] arjie|28 days ago|reply
https://news.ycombinator.com/item?id=3101876
[+] [-] strstr|28 days ago|reply
[+] [-] koliber|28 days ago|reply
A long time ago, food took effort to find, and calories were expensive. Then we had a breakthrough in cost/per/calories. We got fat, because we can not moderate our food intake. It is killing us.
A long time ago, coding took effort, and programmer productivity was expensive. Then we had a breakthrough in cost/per/feature. Now we are exhausted, because we can not moderate our energy and attention expenditure. It is killing us.
[+] [-] delichon|28 days ago|reply
[+] [-] amoss|28 days ago|reply
But at scale. Yegge gets close to it in this blog (which actually made me lol, good to see that he is back on form), but shies away from it.
If AI is producing a real productivity boom, then we should be seeing a flood of high-quality software unrelated to AI. If building and shipping software is now easier and faster, then all of the software we have that doesn't quite work right should be getting displaced by high-quality successors. It should be happening right now.
So where is it? Why is all this velocity going into tooling around AI instead? Face it: an entire industry has fallen into the trap of building the automation instead of the product it was trying to automate the production of.
Where is the new high-quality C compiler that compiles the Linux kernel to measurably better output than gcc? If AI is really increasing productivity, shouldn't we have that instead of a press-oriented hype flop?
rorylaitila | 28 days ago
I'm increasingly thinking that people were either terrible developers, used shit tools to begin with, or are in a mass psychosis. I certainly feel bad for anyone reporting to "the business guy." He never respected you to begin with, and now he literally thinks, "why are you so slow? I can build Airbnb in a weekend."
For someone who previously could achieve nothing, these tools are magical: now they can achieve something. It feels like infinity to them because their base was zero. That alone will let them create a lot of things they wouldn't otherwise have been able to, and good for them. For people who already know what they're doing, however, it's only a slight push along some asymptote. My bottlenecks simply are not measured in tokens to screen.
juanre | 28 days ago
It’s a matter of opportunity cost. It used to be that when I rested for an hour, I lost an hour of output. Now, when I rest for an hour, I lose what used to be a day of output.
I need to rewire my brain and learn how to split the difference. There’s no point in producing a lot of output if I don’t have time to live.
The idea that you’ll get to enjoy the spoils when you grow up is false. You won’t. Just produce 5x and take some time off every day. You may even be more likely to reflect, and end up producing the right thing.
nhinck3 | 28 days ago
> But hey, don’t take it from me. Take it from… the Copilot people. According to The Verge and a bunch of other reputable news sources, Microsoft is openly encouraging their employees to use multiple tools, and as a result, Claude Code has rapidly become dominant across engineering at Microsoft.
And what wonders they've achieved with it! Truly innovative enhancements to notepad being witnessed right now! The inability to shut down your computer! I can finally glimpse the 10x productivity I've been missing out on!
sublinear | 28 days ago
No, that's exactly the purpose of this ending up on HN.
> if you give an engineer Claude Code, then once they’re fluent, their work stream will produce nine additional engineers’ worth of value. For someone.
Nope.
> you decide you’re going to impress your employer, and work for 8 hours a day at 10x productivity. You knock it out of the park and make everyone else look terrible by comparison.
Pure junior dev fantasy. Nobody cares how many hours you "really" work, or what you did, as long as you meet their original requirements. They're going to ignore the rest no matter how big a game you talk. This has been true since the beginning of employment.
> In that scenario, your employer captures 100% of the value from you adopting AI. You get nothing, or at any rate, it ain’t gonna be 9x your salary. And everyone hates you now.
Again, nobody cares.
> Congrats, you were just drained by a company. I’ve been drained to the point of burnout several times in my career, even at Google once or twice.
Pointless humblebrag that even we the readers don't care about.
> Now let’s look at Scenario B. You decide instead that you will only work for an hour a day, and aim to keep up with your peers using AI. On that heavily reduced workload, you manage to scrape by, and nobody notices.
This isn't a thing unless you were borderline worthless and junior to begin with.
> In this scenario, your company goes out of business. I’m sorry, but your victory over The Man will be pyrrhic, because The Man is about to be kicked in The Balls, since with everyone slacking off, a competitor will take them out pretty fast.
Hard disagree. The author has clearly never worked outside of a startup or Silicon Valley, where the money is more mature.
Flagged for what is, at best, extreme ignorance, or more likely ragebait and bad-faith hype for a ship that sailed several years ago. I don't know what else to do with these blog posts anymore.
socialcommenter | 28 days ago
Let's spare the guy some web traffic.
walthamstow | 28 days ago