For me the fatigue is a little different: it's the constant switching between doing a little bit of work/coding/reviewing and then stopping to wait for the LLM to generate something.
The waits are of unpredictable length, so you never know whether you should wait or switch to a new task. So you just do something to kill a little time while the machine thinks.
You never get into a flow state and you feel worn down from this constant vigilance of waiting for background jobs to finish.
I don't feel more productive; I feel like a lazy babysitter who's just doing enough to keep the kids from hurting themselves.
I know this is a terribly irresponsible and immature suggestion, but what I've been doing is every time I give Claude Code a request of indeterminate length, I just hit a blunt and chill out. That, and sometimes I'll tab into the kind of game that can be picked up and put down on very short notice. Here's where I shamelessly plug the free and open source game Endless Sky.
For me personally, programming lost most of its fun many years ago, but with Claude Code I'm having fun again. It's not the same, but for me, at this stage in my life, it's more enjoyable.
That's a type of fatigue that is not new, but I hear you: context-switching fatigue has increased tenfold with the introduction of agentic AI coding tools. Here are some more types of fatigue that have increased with the adoption of LLMs in writing code.
There are plenty of articles on review fatigue, including https://www.exploravention.com/blogs/soft_arch_agentic_ai/ which I published recently. The focus there is less about the impact on the developer and more about the impact on the organization, as letting bugs go to production will trigger a return to high-ceremony releases and release anxiety.
The OP article talks about AI fatigue, of which review fatigue is a part. I would sum up the other parts like this: the agentic AI workflow is so focused on optimizing for productivity that it burns the human out.
The remedy is also not new for office work: take frequent breaks. I would also argue that the human developer should still write some code every now and then, not because the AI cannot do it, but because it would slow the process down and allow the human to recover while still feeling invested.
I joke that I'm on the "Claude Code workout plan" now.
Standing desk, while it's working I do a couple squats or pushups or just wander around the house to stretch my legs. Much more enjoyable than sitting at my desk, hands on keyboard, all day long. And taking my eyes off the screen also makes it easier to think about the next thing.
Moving around does help, but even so, the mental fatigue is real!
Seriously and beyond productivity, flow state was what I liked most about the job. A cup of coffee and noise cancelling headphones and a 2 hour locked in session were when I felt most in love with programming.
I used to lose myself in focused work for hours. That's changed. Now I'm constantly pulled away, and I've noticed the pattern: I send a prompt, wait for the reply, and drift into browsing. Without SelfControl blocking me, I can't seem to resist. I am certainly more productive with LLMs, but I also feel much more tired (and guilty) after a day of work.
I don’t think it’s unreasonable to assume that in 1-2 years inference speed will have increased enough to allow for “real time” prompting, where the agent finishes work in a few seconds instead of a couple of minutes. That will certainly change our workflows. It seems like we are in the dial-up era currently.
You're supposed to write a detailed spec first (ask the AI for help with that part of the job too!) so that it's less likely to go off track when writing the code. Then just ask it to write the code and switch to something else. Review the result when the work is done. The spec then becomes part of your documentation.
This. It’s the context switching and synchronicity, just like when you are managing a project and go round the table - every touch point risks having to go back and remember a bazillion things, plus in the meantime you lose the flow state.
Isn't that similar to the FSD issue, where people cannot engage deeply because it's "FSD," but they still have to stay a little engaged, and sometimes scramble to avoid a wreck?
I try to fix it by having multiple opencode instances running on multiple issues from different projects at the same time, but it feels like I'm just herding robots.
I hope Google has been improving their diffusion model in the background this whole time. Having an agentic system that can spin up diffusion agents for light tasks would be awesome.
For me it honestly matches pretty well. I give it an instruction and go reply to an email, and when I'm back in my IDE I have work (that was done while I was doing something else) to review.
Going back from writing an email to my own work is one thing; going back from an email to reviewing someone else's work feels harder.
What has worked for me is having multiple agents do different tasks (usually in different projects) and doing something myself that I haven't automated yet.
e.g. managing systems, initiating backups, thinking about how I'll automate my backups, etc.
The list of things I haven't automated is getting shorter, and having LLMs generate something I'm happy to hand the work to has been a big part of it.
As long as it's new, I tremendously enjoy binge-watching Claude:
I have three tabs open, and if one of them is not doing something interesting I just switch to a different channel, and occasionally influence the narrative.
This write-up has good ideas but gives me the "AI-generated reading fatigue." Things that can cleanly be expressed in 1-2 sentences are whole paragraphs, often with examples that seem unnecessary or unrealistic. There are also some wrong claims, like the one below:
> The Hacker News front page alone is enough to give you whiplash. One day it's "Show HN: Autonomous Research Swarm" and the next it's "Ask HN: How will AI swarms coordinate?" Nobody knows. Everyone's building anyway.
These posts got less than 5 upvotes, they didn't make it to home page. And while overall quality of Show HN might have dropped, HN homepage is still quite sane.
The topic is also not something "nobody talks about," it's being discussed even before agentic tools became available: https://hn.algolia.com/?q=AI+fatigue
> If you haven't spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
> Code must not be reviewed by humans
> Following this hypothesis, what C did to assembler, what Java did to C, what Javascript/Python/Perl did to Java, now LLM agents are doing to all programming languages.
(All quoted from actual homepage posts today. Fun game: guess which quote is from which article)
> Things that can cleanly be expressed in 1-2 sentences are whole paragraphs
Perhaps the author just likes to write? I've only just recently started blogging more, but I unexpectedly started to really enjoy writing and am hoping to have my posts be more of a "story". Different people have different writing styles. It's not a problem, it's just that you prefer reading posts that are straight to the point.
The boring and likely answer is that it was just clauded out ("I'm tired, chat; look through my last ten days of sessions and write and publish a blog post about why"), but it would be fascinating to discover that the author has actually looked at so much AI output that they just write like this now.
> Things that can cleanly be expressed in 1-2 sentences are whole paragraphs
Funny, I don't associate that with AI. I associate it with having to write papers of a specific length in high school. (Though at least those were usually numbers of pages, so you could get a little juice from tweaking margins, line spacing and font size.)
> but gives me the "AI-generated reading fatigue."
Agree. The article could have been summarized in a few paragraphs. Instead, we get unnecessary verbiage that goes on and on in an AI-generated frenzy. Like the "organic" label on food items, I can foresee labels on content denoting the kind of human generating it: "suburbs-raised," "freelancer," etc.
>These posts got less than 5 upvotes, they didn't make it to home page. And while overall quality of Show HN might have dropped, HN homepage is still quite sane.
Top 24 hours is a better way to gauge sentiment. Here's the top 5 for today (not including this post at #2):
> DoNotNotify is now Open Source
> I am happier writing code by hand
> Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory
> Slop Terrifies Me
> Vouch
Real shame; we just missed the politics post at #7.
I'm also getting really annoyed by AI-generated images like the ones this article has, which don't really help comprehension but make the author feel like they're "pro blogging," because god forbid you have two paragraphs in a row without a subhead or an image.
Programmers complaining about AI but then ripping off umpteen illustrators' labor through AI is infuriating.
> Your manager sees you shipping faster, so the expectations adjust. You see yourself shipping faster, so your own expectations adjust. The baseline moves.
This problem has been going on for a long time; Helen Keller wrote about it almost 100 years ago:
> The only point I want to make here is this: that it is about time for us to begin using our labor-saving machinery actually to save labor instead of using it to flood the nation haphazardly with surplus goods which clog the channels of trade.
I really feel this. I can make meaningful progress on half a dozen projects in the course of a day now but I end the day exhausted.
I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
Decades of intuition about sustainable working practices just got disrupted. It's going to take a while and some discipline to find a good new balance.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
My problem is - before, I'd get ideas, start something, and it would either become immediately obvious it wouldn't be worth the time, or immediately obvious that it wouldn't turn out well / how I thought.
Now, the problem is, everything starts off so incredibly well and goes smoothly... Until it doesn't.
This is real. I'm a freelancer, and I used a small invoicing platform to create invoices for my customers. At "work" I'm working on accounting systems and ERPs. So with AI, why would I pay monthly for invoicing when I can build it myself? After a day I had invoicing working, like the simple thing where you get a PDF out. Then I started implementing double-entry bookkeeping. And support for different tax systems. And then we needed a sales part, then CRM, then warehouse. Then projects to track time, and so on. And now I have a full SaaS that I don't need, and I'm not going to waste time competing in that market. Now I'm thinking of putting it out as open source.
You write a lot about AI. If this is in your free time why not just take a break? If you are ten times more productive, rest for at least twice as much. I don’t get it.
> they're finding building yet another feature with "just one more prompt" irresistible.
Totally my experience too. One last little thing to make it perfect, or something I decide would be "nice to have," ends up taking so much time in total. Luckily, now I can access the same agent session in my phone's mobile browser too, so I can keep an eye on things even in bed. (Joke but not joke :D)
It reduces the friction of coding tremendously. Coding was usually not the bottleneck but it still took a significant amount of time. Now we get to spend more time on the real bottlenecks. Gathering requirements from end users, deciding what should be built, etc.
I've said this a few times here. Tech is never about making life easier for the worker. It is about making the worker more productive and the product more competitive.
Moving from horses to cars did not give you more free time. Moving from the telephone to the smartphone did not give you more fishing time. You just became more mobile, more productive, and more reachable.
Author here. Not an anti-AI post. It's about the cognitive cost - faster tasks lead to more tasks, reviewing AI output all day causes decision fatigue, and the tool landscape churns weekly. Wrote about what actually helped. Curious if others are hitting similar walls.
Why did you use an LLM to write/change the words in your blog and your post? It really accentuates the sense of fatigue when I can tell I'm not interacting with a human on the other side of a message.
Some of the points raised in the article resonate with me, but I see a lot of trademark phrases inserted by LLMs ("it's not X, it's Y" being the most obvious). Can you share what was your writing process? How much did you write yourself, whether you used LLM to proofread or write the entire text from bullet points, or maybe not at all?
Great post; I certainly feel you. Not just the anxiety but the need to push myself more and accomplish more now that I have some help. Setting the right expectations about what is practical, and realizing that not every "AI magic" post is worth the attention, has helped me with the anxiety and the FOMO.
Executive-functioning fatigue. Usually you're doing this in between applying skills; here it's always making top-level decisions and reasoning about possibilities. You don't have nearly as much downtime, because you don't have to implement: you go from hard problem to hard problem with little time in between. You're probably running your prefrontal cortex a lot hotter than usual.
People say AI will make us less intelligent, make certain brain regions shrink, but if it stays like this (and I suspect it won’t, but anyway…) then it’ll just make executive functioning super strong because that’s all you’re doing.
I have a slightly different theory for AI fatigue. Programming essentially comprises three aspects: problem-solving, coding up the solution, and seeing the solution work. We used to have a balance between them that AI agents are disrupting.
Consider that each aspect has different psychological impacts. The first aspect is the hardest, more intense one.
The second one, translating the ideas in your head into code -- where "flow state" typically comes into play -- is mostly a mechanical, somewhat repetitive task, and hence actually meditative. It is relaxing, almost "cozy", in a counter-intuitive way.
The third part, seeing the code work at the end, is a nice little dopamine reward.
Originally we used to shift between these aspects at a much more balanced cadence: we typically spent a lot of time on the stressful problem-solving part, and even longer on the meditative coding stage. The dopamine hits were spread out across longer stretches of time. (Note that this pattern is fractal, because not only is "high-level algorithm design / architecture -> coding" one cycle, the coding stage itself has similar smaller-scale sub-cycles, e.g. "encountering syntax errors / implementation bugs -> fixing them.")
With LLMs this balance has been disrupted. Now that agents are doing most of the coding, we are spending almost all our time in the stressful problem-solving mode, and then directly jumping to the dopamine hits when the agent delivers code that mostly just works. If you're not YOLOing it and do code reviews, those are also intense, as TFA points out. We get almost none of the meditative coding activity in between.
So it's not surprising that oscillating constantly between two intense modes is going to be much more draining. Vibe-coding may actually be the coding equivalent of doom-scrolling.
I think a lot of recent talk about missing the old way of coding is rooted in the loss of that meditative aspect.
Personally I've noticed I prefer a pair-programming approach with AI, which still lets me spend a lot of my time hands-on coding. It may be because this maintains some of that balance and is more sustainable in the long run. However the allure of productivity from wrangling multiple agents is also very, very seductive...
I burned out too. Late last year I was running 4-5 projects in parallel with AI, always wanting every spare second utilized. Put up some amazing output numbers. Then hit a wall.
Since then I've pulled back to 2-3 at a time; that's sustainable for me. But I had to build systems to make it work: larger chunked tasks so I'm not constantly context switching on small stuff, adversarial LLMs where Claude writes and Codex judges so reviews are more solid, and tooling to track whether I'm shipping real complexity or just noise. (That's actually why I built https://gitvelocity.dev: I wanted to measure my own output honestly, not just feel productive.)
Two things I've landed on:
First: the "don't context switch" adage needs updating. It was true when switching meant reloading everything in your head. With AI handling more of the implementation, the switching cost is lower. My wife juggles 10 things managing our kids' schedules and appointments, and she's way better at it than me; she's not wired differently, just practiced. I think we can build that muscle.
Second: we're still measuring productivity the old way. Commits, PRs, lines. AI makes volume trivial, so expectations ratchet up. If we measured complexity and value instead, the pressure to churn would ease.
Burnout is real. But some of this is growing pains, not a permanent condition.
1. Take long pauses: after 1h of work, stop for 30 minutes or more. The productivity gain should leave you more time to rest. Alternatively, work just 50% of the time (2h in the morning, 2h in the evening, instead of 8 hours), yet try to deliver more than before.
2. Don't mix N activities. Work in a very focused way on a single project, making meaningful progress.
3. Don't be too open-ended in the changes you make just because you can do them in little time now. Do what really matters.
4. When you are away, put an agent on the right rails to iterate and potentially provide some very good results in terms of code quality, security, speed, testing, and so on. This increases productivity without stressing you. When you return, inspect the results, discard everything that is trash, and take the gems, if any.
5. Be minimalistic even if you no longer write the code. Prompt the agent (and your AGENT.md file) to stay focused, not to add useless dependencies or complexity, to keep the line count low, and to accept an improvement only if the complexity-cost/benefit ratio is adequate.
6. Turn your flow into specification writing. Stop and write your specifications, even for a long time, without interruptions. This will greatly improve the output of the coding agents. And it is a moment of calm, focused work for you.
(1) is not something the typical employee can do, in my experience. They're expected to work eight hours a day. Though I suppose the breaks could be replaced with low-effort, low-brainpower work to implement a version of that.
1) Is this for founders? Because employees surely can't do this. With new AI surveillance tech, companies are looking over our shoulders even more than before.
My main source of AI fatigue is how it is the main topic anywhere and everywhere I go. I can't visit an art gallery without something pestering me about LLMs.
I loved the section about trying to fight against a system that isn't deterministic.
LLMs, because of their nature, require constant hand-holding by humans, unless businesses are willing to make them entirely accountable for the systems/products they produce.
I don't get this sentiment. If you don't want investors to give you any input, don't take money from investors. With a Claude Max subscription, it's cheaper than ever to develop a product entirely by yourself or with a couple of friends, if that's what you prefer to do.
Never, ever have productivity gains improved the lives of those who do the actual work. They only ever enriched the owners of the factories.
But with “AI” the gain is more code getting generated faster. That is the dumbest possible way to measure productivity in software development. Remember, code is a liability. Pumping out 10x the amount of code is not 10x productivity.
AI generates a solution that's functional, but at a 70% quality level. Then it's really hard to make changes, because it feels horrible to spend an hour or more making minor improvements to something that was generated in a minute.
It also feels a lot worse because it would require context switching and really trying to understand the problem and solution at a deeper level rather than a surface level LGTM.
And if it functionally works, then why bother?
Except that it does matter in the long term as technical debt piles up. At a very fast rate too since we're using AI to generate it.
I personally am a lot less stressed. It helped my mood a lot over the last couple of months. Less worries about forgetting things, about missing problems, about getting started, about planning and prioritizing in solo work. Much less of the "swirling mess" feeling. Context switches are simpler, less drudgery, less friction and pulling my hair out for hours banging against some dumb plumbing and gluing issue or installing stuff from github or configuring stuff on the computer.
We've all seen that if you are interacting with an AI over a lengthy chat, eventually it loses the plot. It gets confused. It appears to me that it's necessary, when coding with an AI, to keep its task very limited in terms of the amount of information it needs to complete it. Even then, you still have to check the output very carefully. If it seems to be losing focus, I start a new task to reduce the context window and focus on something that still needs to be fixed in the previous task.
Personally, I'm loving AI for TECHNICAL problems. Case in point: I just had a server crash last night, and obviously I need to write a summary of what could have possibly caused the issue. This used to take hours, and painful hours at that. If you've ever had to scroll through a Windows event log, you know what I'm talking about. But today I just exported the log and uploaded it to Gemini, asking it:
Looking at this Windows event log, the server rebooted unexpectedly this morning at 4:21am EST. Please analyze the log and let me know what could have been the cause of the reboot.
It took Gemini 5 minutes to come back with an analysis, and not only that, it asked me for the memory dump the machine took. I uploaded that as well, and it told me that it looks like SentinelOne might have caused the problem, and to update the client if possible.
Checking the logs myself, that's exactly what it looks like.
That used to take me HOURS, and now it took me 30 seconds of effort (Gemini needed 10 minutes, but I spent 30 seconds). That is a game changer if you ask me.
I love my job, but I love doing other things rather than combing over a log trying to figure out why a server rebooted. I just want to know what to do to fix it if it can be fixed.
I get that AI might be giving other people a sour taste, but to me it really has made my job, and the menial tasks that come with it, easier.
I have little to no experience with Windows Server, but at least on Linux, this shouldn’t take hours.
Find the last log entries for the system before the reboot; if they point to a specific application, look at its logs, otherwise just check all of them around that time, filtering by log level. Check metrics as well - did the application[s] stop handling requests prior to the restart (keeping in mind that metrics are aggregations), or was it fine up until it wasn’t?
If there are no smoking guns, a hardware issue is possible, in which case any decent server should have logged that.
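Those triage steps can be sketched as a toy script. This assumes a plain-text log export with ISO timestamps; the reboot time, window, and severity names are all illustrative, not any real tool's output:

```python
from datetime import datetime, timedelta

# Hypothetical reboot time and triage window; adjust to the incident.
REBOOT_AT = datetime(2024, 5, 1, 4, 21)
WINDOW = timedelta(minutes=10)
LEVELS = ("WARN", "ERROR", "CRIT")  # severities worth a look

def suspects(lines):
    """Return log lines at interesting severity just before the reboot."""
    hits = []
    for line in lines:
        # Each line is assumed to start with an ISO-8601 timestamp.
        stamp, rest = line.split(" ", 1)
        when = datetime.fromisoformat(stamp)
        if REBOOT_AT - WINDOW <= when < REBOOT_AT and any(lvl in rest for lvl in LEVELS):
            hits.append(line)
    return hits
```

If nothing shows up in the window, widen it, or fall back to checking hardware logs as described above.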
> I just want to know what to do to fix it if it can be fixed.
Serious question: how do you plan on training juniors if troubleshooting consists of asking an AI what to do?
I agree with the sentiment. I don't code a lot, but AI has sped things up in all fields for which I use it (or at least the expectation of speed has grown). For me, it's the context switching, but also just the mental load of holding so many projects and ideas together in my head. It somewhat helps that the usable context of LLMs has grown over time, so I tend to trust the "memory" of the AI a bit more to keep track of things, and try to offload stuff from my brain.
Obviously AI generated article. And the author hasn't made any attempt to disclose it. Take that into consideration.
Yet, The Machine has good points.
>For someone whose entire career is built on "if it broke, I can find out why," this is deeply unsettling. Not in a dramatic way. In a slow, grinding, background-anxiety way. You can never fully trust the output. You can never fully relax. Every interaction requires vigilance.
> you are collaborating with a probabilistic system, and your brain is wired for deterministic ones. That mismatch is a constant, low-grade source of stress.
Back when I bought my first computer, it was a crappy machine that crashed all the time (peak of the fake-capacitor plague in 2006). That made me doubt and second-guess everything that is usually taken for granted in hardware and software (like simply booting up). That mindset proved useful later in my career.
I’m not saying anything new. Andy Hunt and Dave Thomas have written about it in a way better way. I find it to still hold very relevant guidelines.
Engineers that have the audacity to think they can context switch between a dozen different lines of work deserve every ounce of burnout they feel. You're the tech equivalent of wanting to be a Kardashian and you're complicit in the damage being caused to society. No, this isn't hyperbole.
I review all code it ever pushes, primarily because on occasion it nukes thousands of lines for no reason.
When I read the 'what actually helped' section, it's clearly decision fatigue they are managing.
Here's the real trick: you need to change how you're using AI. Have the AI make the decisions. Treat the AI like a new-hire dev that doesn't get to push anything directly. Then you have AI review the changes, have AI build tests around them, and have AI do a security review.
> The reason is simple once you see it, but it took me months to figure out. When each task takes less time, you don't do fewer tasks. You do more tasks.
> AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.
The combination of these two facts is why I'm so glad I quit my job a couple of years ago and started my own business. I'm a one-man show and having so much fun using AI as I run things.
Long term, it definitely feels like AI is going to drive company sizes down and lead to a greater prevalence of SMBs, since they get all the benefits with few of the downsides.
> Here's what I think the real skill of the AI era is. It's not prompt engineering. It's not knowing which model to use. It's not having the perfect workflow.
> It's knowing when to stop.
99% of gamblers stop right before they hit it big.
Apart from the exhaustion of context switching, I believe there is an internal signal that gauges how "fast" things are happening in your life. Stress responses are triggered whenever things are going too fast (as if you were driving on a narrow road at too high a speed), and it feels like there is danger, since you intuit that a small mistake is going to have big consequences.
Some people thrive in more stressful situations, because they don't get as aroused in calmness, but everybody has a threshold velocity at which discomfort starts, higher or lower. AI puts us closer to that threshold, for sure.
On the other side, I feel like using AI tools can reduce the cognitive overload of doing a single task, which can be nice. If you're able to work with a tool that's fast enough and just focus on a single task at a time, it feels like it makes things easier. When you try to parallelize that's when things get messier.
There's a downside to that too: cognitive effort is directly correlated with learning, so your own skills start to feel less sharp as you do that (as the article mentions).
All these tools can be a big waste of time if you're an end-user dev. It only makes sense if you are investing your time to eventually use that workflow knowledge to make a product.
I only use the free tiers of any particular app. It forces you to really think about what you want the tool to do, as opposed to treating it as the 'easy' button.
I haven’t hit this yet and now I feel like someone just told me about thorns for the first time while I’m here jogging confidently through the woods with shorts on.
I love AI for setting up my Linux desktop, or showing me the correct settings for AWS, or debugging a Power BI subscription, or finding out why an integration stopped working. It is great for deploying, quickly creating a bunch of systemd configs, setting up pgbouncer: all this tedious, time-consuming support and consultant stuff.
> What should this function be named? I didn't care. Where should this config live? I didn't care. My brain was full. Not from writing code - from judging code.
Does it matter anymore? Most good engineering principles are to ensure code is easy to read and maintain by humans. When we no longer are the target audience for that, many such decisions are no longer relevant.
> You're experiencing something real that the industry is aggressively pretending doesn't exist.
I agree with the article and recognize the fatigue, but I have never experienced that the industry is "aggressively pretending it does not exist". It feels like a straw man, but maybe you have examples of this happening.
It kind of takes its toll somehow; because the solution is "done" once it's fed to the AI, there's no time to recover by churning out boilerplate code or other patterns that sit in the spine.
> If you're an engineer who uses AI daily - for design reviews, code generation, debugging, documentation, architecture decisions - and you've noticed that you're somehow more tired than before AI existed, this post is for you.
AI is not good for human health; there we have it.
It seems a better and fuller solution to a lot of these problems is to just stop using AI.
I may be an odd one, but I'm refusing to use agents, and just happily coding everything myself. I only ask an LLM occasional questions about libraries etc. Are there others like me out there?
Hi, no one's responded to you after 12 hours so I will.
I don't outright refuse to use LLM's, but I use them as little as possible. I enjoy the act of programming too much to delegate it to a machine.
For a while now there have been programmers who don't actually enjoy programming and are just in it for the money. This happens because programmer salaries are high and the barrier to entry is relatively low. I can imagine LLMs must feel like a godsend to them.
I keep pushing the AI to do absolutely everything, to a fault: instead of spending 10 minutes manually correcting a mistake the AI made, I spend hours adjusting and rerunning the prompt to correct it.
Yeah, I read a zine the other day where a sociologist warned that the biggest threat isn't that AI destroys jobs; it's that AI is compacting more work into each person.
Employers expect more from each employee, because, well, AI is helping them, right?
> So you read every line. And reading code you didn't write, that was generated by a system that doesn't understand your codebase's history or your team's conventions, is exhausting work.
I’ve noticed this strongly on the database side of things. Your average dev’s understanding of SQL is unfortunately shaky at best (which I find baffling; you can learn 95% of what you need in an afternoon, and probably get by from referencing documentation for the rest), and AI usage has made this 10x worse.
It honestly feels unreasonable and unfair to me. By requesting my validation of your planned schema or query that an AI generated, you’re tacitly admitting that a. You know it’s likely that it has problems b. You don’t understand what it’s written, but you’re requesting a review anyway. This is outsourcing the cognitive load that you should be bearing as a normal part of designing software.
What makes it even worse is MySQL, because LLMs seem to consistently think that it can do things that it can’t (or is at least highly unlikely to choose to), like using multiple indices for a single table access. Also, when pushed on issues like this, I’ve seen them make even more serious errors, like suggesting a large composite index which it claimed could be used for both the left-most prefix and right-most prefix. That’s not how a B{-,+}tree works, my dude, and of all things, I would think AI would have rock-solid understanding of DS&A.
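To illustrate the left-most-prefix point, here is a small sketch using SQLite rather than MySQL (a different engine, but composite B-tree indexes behave the same way for prefix matching in both): a composite index on (a, b) serves a filter on `a` alone, but a filter on `b` alone cannot use it.

```python
# Sketch: composite index prefix matching, demonstrated with SQLite's
# EXPLAIN QUERY PLAN. MySQL/InnoDB B-tree indexes follow the same
# left-most-prefix rule, though the EXPLAIN output format differs.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, c TEXT)")
conn.execute("CREATE INDEX idx_ab ON t (a, b)")

def plan(sql: str) -> str:
    # Column 3 of each EXPLAIN QUERY PLAN row holds the readable detail.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Filtering on `a` (the left-most indexed column) can use idx_ab...
print(plan("SELECT c FROM t WHERE a = 1"))
# ...but filtering on `b` alone cannot, and degrades to a scan.
print(plan("SELECT c FROM t WHERE b = 1"))
```

Both columns live in the same B-tree, but the tree is ordered by `a` first, so there is no contiguous range to seek for a bare `b` predicate. That's why the "usable for both the left-most and right-most prefix" claim is wrong.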
I doubt someone with 3+ years of experience would articulate it like this; it must have been AI-written, or somebody from his work or a mentor did it. At the same time, I am able to resonate with this article. Good one.
Personally, I take a break from AI and write the code myself at least a few times each day. It keeps one intellectually honest about whether or not you really understand what's going on.
When I was in my mid-20s, I interned at a machine shop building automotive parts. In general, the work was pretty easy. I was modifying things via CAD, doing dry runs on the CNC machine, loading raw material, and then unloading finished products for processing.
Usually there was a cadence to things that allowed for a decent amount of downtime while the machine was running, but I once got to a job where the machine milled the parts so quickly, that I spent more time loading and unloading parts than anything else. Once I started the first part, I didn't actually rest until all of them were done. I ended up straining my back from the repetitive motion. I was shocked because I was in good shape and I wasn't really moving a significant amount.
If I talk about excessive concern for productivity (or profit) being a problem, certain people will roll their eyes. It's hard to separate a message from the various agendas we perceive around us. Regardless of personal feelings, there will always be a negative fallout for people when there's a sudden inversion in workflow like the one described in this article or the one I experienced during my internship.
I don't have exhaustion as such, but an increasing sense of dread: the more incredible work I achieve, the less valuable I realise it will potentially be, due to its low-effort cost.
I’ve definitely been feeling that shift too. What have you guys found that helps with this? Any habits you use to avoid the constant context switching and decision fatigue?
I've found that small features help. If I ask the AI "do this", I wait 10 minutes for it to be finished, then another 30 understanding what was written.
Instead, I start out broad. "I'm thinking of this feature. Let's talk about what components will go into it and where that code will live... Here are my initial thoughts. Let me know if I get anything wrong..."
This puts it in a conversation mode and gets you on the same page, and best of all it's quick. The llm agent isn't searching files and building, it's just giving general opinions
Once I have a good idea of the architecture, I move on to "write this class; here is an overview of how I would do it", or point the agent to an existing file with the right conventions in place. Once again, if you've had your conversation first, this doesn't take long. 20-second turnaround.
Then it's time to refine. "Move this function here, refactor that"
If you're going to have to understand the code anyway, read it while you're writing it. Line by line, function by function, class by class. "Growing" a feature this way keeps you and the LLM in sync. It doesn't go off on tangents. And building incrementally keeps you in the loop and in your flow. You don't have 5 things going at once. You have 1, faster.
I've let agents do their thing for a dozen minutes and usually end up having to rewind the whole thing piece by piece to understand what's actually happening
If you're truly vibe coding, maybe you don't have to read the code and can have a dozen agents on a dozen projects. But if you're making Serious Software, I don't see how you can commit code unseen. And in that case, you're not losing anything working incrementally and reading code as it's written?
When I was doing 5 projects at once, trying to use up the $250 of free Claude Code Web credit Anthropic gave out before it expired, it was surprisingly exhausting.
The other part that's exhausting is having to rethink your tool chain and workflow on a very regular basis. New models, new tools, new prompting strategies.
funnily that sentence is written very AI-ish. It's a really common pattern of "it's not x, it's y" and specifically the phrase "You're not imagining it"
I think the fatigue is that the technology has been hyped since long before today when it’s actually started to become somewhat useful.
And even today when it’s useful, it’s really most useful for very specific domains like coding.
It’s not been impressive at all with other applications. Just chat with your local AI chat bot when you call customer service.
For example, I watch a YouTube channel where this guy calls up car dealerships to negotiate car deals and some of them have purchased AI receptionist solutions. They’re essentially worse than a simple “press 1 for sales” menu and have essentially zero business value.
Another example, I switched to a cheap phone plan MVNO that uses AI chat as its first line of defense. All it did was act as a natural language search engine for a small selection of FAQ pages, and to actually do anything you needed to find the right button to get a human.
These two examples of technology were not worth the hype. We can blame those businesses all day long but at the end of the day I can’t imagine those businesses are going to be impressed with the results of the tech long term. Those car dealerships won’t sell more cars because of it, my phone plan won’t avoid customer service interactions because of it.
In theory, these AI systems should easily be able to be plugged in to do some basic operations that actually save these businesses from hiring people.
The cellular provider should be able to have the AI chatbot make real adjustments to your account, even if they’re minor.
The car dealership bot should be able to set the customer up in the CMS by collecting basic contact info, and maybe should be able to send a basic quote on a vehicle stock number before negotiations begin.
But in practice, these AI systems aren’t providing significant value to these businesses. Companies like Taco Bell can’t even replace humans taking food orders despite the language capabilities of AI.
The weird thing at the end of the day is that we live in this world where there is this default individual desire to be more "productive." I am always wondering, productive for who, for what?
I know more than most that there is some baseline productivity we are always trying to be at, which can sometimes be a target more than a current state. But the way people talk about their AI workflows is different. It's like everyone has become tyrannical factory floor managers, pushing ever further for productivity gains.
Leave this kind of productivity to the bosses, I say! Life is a broader surface than this. We can/should focus on being productive together, but leave your actual life for finer, more sustainable ventures.
> When each task takes less time, you don't do fewer tasks. You do more tasks.
And you're also paid more. Find a job that asks less of you if you are fatigued; not everyone wants to sacrifice their personal life for their career. Those are choices you have to make, but AI doesn't inherently force you to become overworked.
But ... but ... your productivity as an engineer shoots up! You can take on more tasks and ship more!
-- Dumbass Engineering Director who has never written a line of code in their life.
Unfortunately, with these types of software simpletons making decisions, we are going to see way more push for AI usage and thus higher productivity expectations. They cannot wrap their heads around the fact (for starters) that AI is not deterministic, which increases the overhead on testing, security, requirements, integrations etc., making all those productivity gains evaporate. Worse (as the author mentioned), it makes your engineers less creative and more burnt-out.
Let's be honest here. Engineers picked this career broadly for 2 reasons, creativity and money. With AI, the creativity aspect is taken away and you are now more of a tester. As for money, those same dumbass decision makers are now going to view this skillset as a commodity and find people who can easily be trained in to "AI Engineers" for way less money to feed inputs.
I am all for technological evolution and welcome it, but this isn't anything like that. It is purely based on profits and shareholders, anything but building good, proper software systems. Quality be damned. The profession of software development be damned. We will regret it in the future.
This is like people crying how their phones are ruining their lives. Just stop. Take some responsibility and control of your life. I don’t feel exhausted especially after letting LLM hallucinate some code lines. If you do, maybe it is time to re-evaluate your life choices
Taking breaks is really something to try to get right in 2026 - to just write regular code, to read, to exercise even. The mind can eventually get overloaded, and there's no way around proper hygiene.
Absolute middlebrow dismissal incoming, but the real thinking atrophy is writing blog posts about thinking atrophy caused by LLMs using an LLM.
It is getting very hard to continue viewing HN as a place where I want to come and read content others have written when blog posts written largely with ChatGPT are constantly upvoted to the top.
It's not the co-writing process I have a problem with, it's that ChatGPT can turn a shower thought into a 10 minute essay. This whole post could have been four paragraphs. The introduction was clearly written by an intelligent and skilled human, and then by the second half there's "it's not X, it's Y" reframe slop every second sentence.
The writing is too good to be entirely LLM generated, but the prose is awful enough that I'm confident this was a "paste outline into chatgpt and it generates an essay" workflow.
Frustrating world. I'm lambasting OP, but I want him to write, but actually, and not via a lens that turns every cool thought into marketing sludge.
Why do you think the author used ChatGPT to write this? It has human imperfections, and except for 'The "just one more prompt" trap' I didn't think it was written by a prompt.
The way I experience this is through unprecedented amount of feature creep. We don't use AI generated code for all our projects, but in the ones we do, I see a weird anti-pattern settle in: Simply because it's faster than ever before to generate a patch and get it merged, it doesn't mean that merging 50+ commits this week makes sense.
Code and feature still need to experience time and stability in order to achieve maturity. We need to give our end users time to try stuff, to shape their opinions and habits. We need to let everyone on the dev team take the time to update their mental model of the project as patches are merged. Heck, I've seen too many Product Owners incapable of telling you clearly what went in and out of the code over the previous 2 releases, and those are usually a few weeks apart.
Making individual tasks faster should give us more time to think in terms of quality and stability. Instead, people want to add more features more often.
I'm certainly getting tired of the AI slop images and videos. For coding and software development, I'm outright excited (and a little scared) of what I've been able to accomplish with GPT and Claude. I'm a software developer with 25 years experience living in the upper Midwest USA.
I write software professionally and remotely for a large, boring insurance company, but I'm building a side project in an area of interest using AI tools to assist, and in a couple of months at a few hours per week I've created what would have taken me a year or more. I've read others' comments about having to babysit the AI tools, but that's not so bad.
One small benefit I've noticed using AI tools to "vibecode" is that sometimes they come back with solutions that I never would have come up with. ...and then there are the solutions where I click the Undo button and shake my head.
I can’t identify with this at all. I feel far less drained and far less overworked in general. I get the tasks done if not quicker at least with far less mental exertion now than ever before. Do not take me back please.
This author puts into words several thoughts of mine that have been jelling recently. In the end it still seems to miss that AI is totally optional. It's a depressing read, because the basic gist is: this AI stuff is weird and depressing and addicting to the point of losing productivity, but I have to use it, so here are some ways to try and counter those negatives.
Dude! You don't have to use it!! Just write code yourself. Do a web search if you are stuck; the information is still out there on Stack Overflow and Reddit. Maybe use Kagi instead of Google, but the old ways still work really well.
I've been building https://roborev.io/ (continuous background code review for agents) essentially as a cope, to supervise the poor quality of the agents' work, since my agents write much more code than I can possibly review directly or QA thoroughly. I think we'll see a bunch of interesting new tools to help alleviate the cognitive burden of supervising their work output.
Am I the only one that primarily learns by doing? If I'm not writing code, only doing code reviews, my familiarity and proficiency with the codebase gradually goes down, even if I'm still reading all the code being written. There's a lot of value lost when you cut out the part where you have to critically think up the solution rather than just reviewing it.
I can’t take seriously an article on AI written so obviously using AI. The unmistakable (lack of) style. If the author is not even aware that the rhetoric that GPT produces is unvaried and predictable, how can I believe the author really means what they write? Slop is cancer
I'm a big AI booster, but I'm so sick of how crazy hype has gotten. Claude Cowork? Game changer! Ralph? Nothing will ever be the same. LOLClaw? Singularity, I welcome our new AI overlords.
AI cope regarding "you can still carefully design, AI won't take away your creative control or care for the craft" is the new "there's no problem with C's safety and design, devs just need to pay more attention while coding", or the "I'm not an alcoholic, I can quit anytime" of 2026...
I’m shocked that the obvious analysis hasn’t come up: this is more disingenuous talk Karpathy-style, designed to awaken feelings of FOMO from someone who’s not developing normal software with A.I. but is selling A.I. programming tools.
> I shipped more code last quarter than any quarter in my career. I also felt more drained than any quarter in my career. These two facts are not unrelated.
I’m gonna be generous (and try not to be pedantic) and assume that more-code means more bugfixes and features (and whatnot) and not more LOC.
Your manager has mandated X tokens a day or you feel you have to use it to keep up. Huh?
> I build AI agent infrastructure for a living. I'm one of the core maintainers of OpenFGA (CNCF Incubating), I built agentic-authz for agent authorization, I built Distill for context deduplication, I shipped MCP servers. I'm not someone who dabbles with AI on the side. I'm deep in it. I build the tools that other engineers use to make AI agents work in production.
Oh.
> If you're an engineer who uses AI daily - for design reviews, code generation, debugging, documentation, architecture decisions - and you've noticed that you're somehow more tired than before AI existed, this post is for you. You're not imagining it. You're not weak. You're experiencing something real that the industry is aggressively pretending doesn't exist. And if someone who builds agent infrastructure full-time can burn out on AI, it can happen to anyone.
This is what ChatGPT writes to me when I ask “but why is that the case”.
1. No, you are not wrong
2. You don’t have <bad character trait>
3. You are experiencing something real
> I want to talk about it honestly. Not the "AI is amazing and here's my workflow" version. The real version.
And it will be unfiltered. Raw. And we will conclude with how to go on with our Flintstone Engineering[2] but with some platitudes about self-care.
> The real skill ... It's knowing when to stop.
Stop prompting? Like, for
> Knowing when the AI output is good enough.
Ah. We do short prompting sessions instead.
> Knowing that your brain is a finite resource and that protecting it is not laziness - it's engineering.
Indeed it’s not this thing. It’s that—thing.
> AI is the most powerful tool I've ever used. It's also the most draining. Both things are true. The engineers who thrive in this era won't be the ones who use AI the most. They'll be the ones who use it the most wisely.
Of course we will keep using “the most powerful tool I’ve ever used”. But we will do it wisely.
What’s to worry about? You can use ChatGPT as your therapist now.
parpfish|21 days ago
mikkupikku|21 days ago
gengstrand|21 days ago
The remedy is also not new for office work, take frequent breaks. I would also argue that the human developer should still write some code every now and then, not because the AI cannot do it but because it would slow the process down and allow for the human to recover while still feeling invested.
alex_c|21 days ago
Standing desk, while it's working I do a couple squats or pushups or just wander around the house to stretch my legs. Much more enjoyable than sitting at my desk, hands on keyboard, all day long. And taking my eyes off the screen also makes it easier to think about the next thing.
Moving around does help, but even so, the mental fatigue is real!
ericmcer|21 days ago
rubslopes|21 days ago
amelius|21 days ago
LLMs force me to context switch all the time.
jeremyjacob|21 days ago
iterateoften|21 days ago
Probably more stress if I’m on battery and don’t want the laptop to sleep or WiFi to get interrupted.
zozbot234|21 days ago
rcarmo|21 days ago
agumonkey|21 days ago
xnx|21 days ago
Edit: Looks like plenty of people have observed this: https://www.reddit.com/r/xkcd/comments/12dpnlk/compiling_upd...
wouldbecouldbe|21 days ago
JeremyNT|21 days ago
I try to fix it by having multiple opencode instances running on multiple issues from different projects at the same time, but it feels like I'm just herding robots.
Maybe I'm ready for gastown..
WarmWash|21 days ago
Davidzheng|21 days ago
Fire-Dragon-DoL|21 days ago
mavamaarten|21 days ago
Going back from writing an email to working, versus going back from email to reviewing someone else's work feels harder.
9rx|21 days ago
the-grump|21 days ago
e.g. managing systems, initiating backups, thinking about how I'll automate my backups, etc.
The list of things I haven't automated is getting shorter, and having LLMs generate something I'm happy to hand the work to has been a big part of it.
aixpert|21 days ago
jwarden|21 days ago
ReptileMan|21 days ago
SecretDreams|21 days ago
pylua|21 days ago
SomeHacker44|21 days ago
likeajr|21 days ago
[deleted]
z0ltan|21 days ago
[deleted]
barishnamazov|21 days ago
> The Hacker News front page alone is enough to give you whiplash. One day it's "Show HN: Autonomous Research Swarm" and the next it's "Ask HN: How will AI swarms coordinate?" Nobody knows. Everyone's building anyway.
These posts got fewer than 5 upvotes; they didn't make it to the home page. And while the overall quality of Show HN might have dropped, the HN homepage is still quite sane.
The topic is also not something "nobody talks about," it's being discussed even before agentic tools became available: https://hn.algolia.com/?q=AI+fatigue
raincole|21 days ago
Those Show HN posts aren't the insane part. The insane part is stuff like:
> Thank you, OpenClaw. Thank you, AGI—for me, it’s already here.
> If you haven't spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
> Code must not be reviewed by humans
> Following this hypothesis, what C did to assembler, what Java did to C, what Javascript/Python/Perl did to Java, now LLM agents are doing to all programming languages.
(All quoted from actual homepage posts today. Fun game: guess which quote is from which article)
jairuhme|21 days ago
Perhaps the author just likes to write? I've only just recently started blogging more, but I unexpectedly started to really enjoy writing and am hoping to have my posts be more of a "story". Different people have different writing styles. It's not a problem, it's just that you prefer reading posts that are straight to the point.
StilesCrisis|21 days ago
QuadmasterXLII|21 days ago
idopmstuff|21 days ago
Funny, I don't associate that with AI. I associate it with having to write papers of a specific length in high school. (Though at least those were usually numbers of pages, so you could get a little juice from tweaking margins, line spacing and font size.)
bwfan123|21 days ago
Agree. The article could have been summarized into a few paragraphs. Instead, we get unnecessary verbiage that goes on and on in an AI generated frenzy. Like the "organic" label on food items, I can foresee labels on content denoting the kind of human generating the content: "suburbs-raised" "free-lancer" etc.
jitbit|21 days ago
"You're not imagining it."
"But my days got harder. Not easier. Harder."
"Now?" as the paragraph opener
"Why? No reason." as the paragraph opener
Nice try OP, submitting your own post to HN.
pcurve|21 days ago
unknown|21 days ago
[deleted]
johnnyanmac|21 days ago
top 24 hours is a better way to get sentiment. Here's the top 5 for today (not including this post at #2)
> DoNotNotify is now Open Source
> I am happier writing code by hand
> Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory
> Slop Terrifies Me
> Vouch
real shame, we just missed the politics post at #7.
spopejoy|21 days ago
Programmers complaining about AI but then ripping off umpteen illustrators' labor through AI is infuriating.
tangotaylor|21 days ago
This problem has been going on for a long time; Helen Keller wrote about it almost 100 years ago:
> The only point I want to make here is this: that it is about time for us to begin using our labor-saving machinery actually to save labor instead of using it to flood the nation haphazardly with surplus goods which clog the channels of trade.
https://www.theatlantic.com/magazine/archive/1932/08/put-you...
simonw|21 days ago
I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
Decades of intuition about sustainable working practices just got disrupted. It's going to take a while and some discipline to find a good new balance.
onlyrealcuzzo|21 days ago
My problem is - before, I'd get ideas, start something, and it would either become immediately obvious it wouldn't be worth the time, or immediately obvious that it wouldn't turn out well / how I thought.
Now, the problem is, everything starts off so incredibly well and goes smoothly... Until it doesn't.
disiplus|21 days ago
keybored|21 days ago
krackers|21 days ago
> another feature with "just one more prompt" irresistible
The llm agents as slot machines analogy seems to be getting stronger...
dizhn|21 days ago
Totally my experience too. One last little thing to make it perfect or something that I decide would be "nice to have" ends up taking so much time in total. Luckily now I can access the same agent session on my phone mobile browser too so I can keep an eye on things even in bed. (Joke but not joke :D)
nonethewiser|21 days ago
bob_theslob646|21 days ago
It reminds me of why people wanted financial markets to be 24/7.
We as a society should probably take a look at that otherwise it may lead to burnout in a not so small percentage of people
zkmon|21 days ago
Moving from horses to cars did not give you more free time. Moving from telephone to smartphone did not give more fishing time. You just became more mobile, more productive and more reachable.
xnx|21 days ago
sidk24|21 days ago
bonsai_spool|21 days ago
PaulHoule|21 days ago
https://scienceintegritydigest.com/2024/02/15/the-rat-with-t...
tjstarak|21 days ago
srameshc|21 days ago
ai_sloppy_toppy|21 days ago
geetee|21 days ago
mrspacejam|21 days ago
Chance-Device|21 days ago
People say AI will make us less intelligent, make certain brain regions shrink, but if it stays like this (and I suspect it won’t, but anyway…) then it’ll just make executive functioning super strong because that’s all you’re doing.
booleandilemma|21 days ago
keeda|21 days ago
Consider that each aspect has different psychological impacts. The first aspect is the hardest, most intense one.
The second one, translating the ideas in your head into code -- where "flow state" typically comes into play -- is mostly a mechanical, somewhat repetitive task, and hence actually meditative. It is relaxing, almost "cozy", in a counter-intuitive way.
The third part, seeing the code work at the end, is a nice little dopamine reward.
Originally we used to shift between these aspects at a much more balanced cadence: we typically spent a lot of time on the stressful problem-solving part, and even longer on the meditative coding stage. The dopamine hits were spread out across longer stretches of time. (Note that this pattern is fractal, because not only is "high-level algorithm design / architecture -> coding" one cycle, the coding stage itself has similar smaller-scale sub-cycles, e.g. "encountering syntax errors / implementation bugs -> fixing them.")
With LLMs this balance has been disrupted. Now that agents are doing most of the coding, we are spending almost all our time in the stressful problem-solving mode, and then directly jumping to the dopamine hits when the agent delivers code that mostly just works. If you're not YOLOing it and do code reviews, those are also intense, as TFA points out. We get almost none of the meditative coding activity in between.
So it's not surprising that oscillating constantly between two intense modes is going to be much more draining. Vibe-coding may actually be the coding equivalent of doom-scrolling.
I think a lot of recent talk about missing the old way of coding is rooted in the loss of that meditative aspect.
Personally I've noticed I prefer a pair-programming approach with AI, which still lets me spend a lot of my time hands-on coding. It may be because this maintains some of that balance and is more sustainable in the long run. However the allure of productivity from wrangling multiple agents is also very, very seductive...
chuboy|21 days ago
Since then I've pulled back to 2-3 at a time - that's sustainable for me. But I had to build systems to make it work: larger chunked tasks so I'm not constantly context switching on small stuff, adversarial LLMs where Claude writes and Codex judges so reviews are more solid, tooling to track whether I'm shipping real complexity or just noise. (That's actually why I built https://gitvelocity.dev - I wanted to measure my own output honestly, not just feel productive.)
Two things I've landed on:
First - the "don't context switch" adage needs updating. It was true when switching meant reloading everything in your head. With AI handling more of the implementation, the switching cost is lower. My wife juggles 10 things managing our kids' schedules/appointments and she's way better at it than me - not wired differently, just practiced. I think we can build that muscle.
Second - we're still measuring productivity the old way. Commits, PRs, lines. AI makes volume trivial, so expectations ratchet up. If we measured complexity and value instead, the pressure to churn would ease.
Burnout is real. But some of this is growing pains, not a permanent condition.
antirez|21 days ago
2. Don't mix N activities. Work in a very focused way on a single project, making meaningful progress.
3. Don't be too open-ended in the changes you make just because you can do them in little time now. Do what really matters.
4. When you are away, put an agent on the right rails to iterate and potentially deliver some very good results in terms of code quality, security, speed, testing, ... This increases productivity without stressing you. When you return, inspect the results, discard everything that is trash, and take the gems, if any.
5. Be minimalistic even if you no longer write the code. Prompt the agent (and in your AGENT.md file) to stay focused, not to add useless dependencies or complexity, to keep the line count low, and to accept an improvement only if the complexity-cost/gain ratio is adequate.
6. Turn your flow into specification writing. Stop and write your specifications, even for a long time, without interruptions. This will greatly improve the output of the coding agents. And it is a moment of calm, focused work for you.
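Point 5 above might translate into AGENT.md guidance along these lines (hypothetical wording, shown purely as an illustration, not anyone's actual file):

```markdown
## Constraints

- Stay focused on the task at hand; do not refactor unrelated code.
- Do not add new dependencies unless explicitly asked.
- Keep the line count low: prefer the smallest change that works.
- Accept an improvement only if the gain justifies the added complexity.
```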
fhd2|21 days ago
falloutx|21 days ago
gherkinnn|21 days ago
CurleighBraces|21 days ago
LLMs, because of their nature, require constant hand-holding by humans, unless businesses are willing to make them entirely accountable for the systems/products they produce.
tempodox|21 days ago
Do you hold the dice accountable when you lose at the craps table?
falcor84|21 days ago
onraglanroad|21 days ago
That's the way society is set up.
falcor84|21 days ago
tempodox|21 days ago
But with “AI” the gain is more code getting generated faster. That is the dumbest possible way to measure productivity in software development. Remember, code is a liability. Pumping out 10x the amount of code is not 10x productivity.
preommr|21 days ago
AI generates a solution that's functional, but that's at a 70% quality level. But then it's really hard to make changes because it feels horrible to spend 1 hour+ to make minor improvements to something that was generated in a minute.
It also feels a lot worse because it would require context switching and really trying to understand the problem and solution at a deeper level rather than a surface level LGTM.
And if it functionally works, then why bother?
Except that it does matter in the long term as technical debt piles up. At a very fast rate too since we're using AI to generate it.
bonoboTP|21 days ago
It's a million little quality-of-life things.
VikRubenfeld|21 days ago
unknown|21 days ago
[deleted]
thrownaway561|21 days ago
Looking at this Windows event log, the server rebooted unexpectedly this morning at 4:21am EST. Please analyze the log and let me know what could have been the cause of the reboot.
It took Gemini 5 minutes to come back with an analysis, and not only that, it asked me for the memory dump that the machine took. I uploaded that as well, and it told me that it looks like SentinelOne might have caused the problem, and to update the client if possible.
Checking the logs myself, that's exactly what it looks like.
That used to take me HOURS to do. Now it took Gemini 10 minutes, but me only 30 seconds of my own time. That is a game changer if you ask me.
I love my job, but I love doing other things rather than combing over a log trying to figure out why a server rebooted. I just want to know what to do to fix it if it can be fixed.
I get that AI might be giving other people a sour taste, but to me it really has made my job, and the menial tasks that come with it, easier.
sgarland|21 days ago
Find the last log entries for the system before the reboot; if they point to a specific application, look at its logs, otherwise just check all of them around that time, filtering by log level. Check metrics as well - did the application[s] stop handling requests prior to the restart (keeping in mind that metrics are aggregations), or was it fine up until it wasn’t?
If there are no smoking guns, a hardware issue is possible, in which case any decent server should have logged that.
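The triage steps above can be sketched in a few lines. This is a hypothetical minimal version, assuming logs have already been parsed into (timestamp, level, source, message) tuples; the field layout, window size, and sample entries are all made up for illustration, not taken from the comment.

```python
from datetime import datetime, timedelta

def entries_before_reboot(entries, reboot_at, window_minutes=30,
                          levels=("WARNING", "ERROR", "CRITICAL")):
    """Return warning-or-worse entries in the window leading up to the reboot."""
    start = reboot_at - timedelta(minutes=window_minutes)
    return [e for e in entries
            if start <= e[0] < reboot_at and e[1] in levels]

# Toy parsed log: (timestamp, level, source, message)
logs = [
    (datetime(2024, 1, 5, 3, 50), "INFO", "app", "heartbeat ok"),
    (datetime(2024, 1, 5, 4, 10), "ERROR", "SentinelOne", "driver fault"),
    (datetime(2024, 1, 5, 4, 15), "WARNING", "kernel", "bugcheck imminent"),
    (datetime(2024, 1, 5, 4, 30), "INFO", "system", "boot"),  # after the reboot
]

suspects = entries_before_reboot(logs, reboot_at=datetime(2024, 1, 5, 4, 21))
for ts, level, source, msg in suspects:
    print(ts, level, source, msg)
```

The point is not the code itself but the discipline it encodes: anchor on the reboot time, narrow the window, filter by severity, and only then start reading messages.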
> I just want to know what to do to fix it if it can be fixed.
Serious question: how do you plan on training juniors if troubleshooting consists of asking an AI what to do?
thesumofall|21 days ago
highspeedbus|21 days ago
Yet, The Machine has good points.
>For someone whose entire career is built on "if it broke, I can find out why," this is deeply unsettling. Not in a dramatic way. In a slow, grinding, background-anxiety way. You can never fully trust the output. You can never fully relax. Every interaction requires vigilance.
> you are collaborating with a probabilistic system, and your brain is wired for deterministic ones. That mismatch is a constant, low-grade source of stress.
Back when I bought my first computer, it was a crappy machine that crashed all the time (peak of the fake-capacitor plague, in 2006). That made me doubt and second-guess everything that is usually taken for granted in hardware and software (like simply booting up). That mindset proved useful later in my career.
I’m not saying anything new. Andy Hunt and Dave Thomas have written about it in a way better way. I find it to still hold very relevant guidelines.
https://www.khoury.northeastern.edu/home/lieber/courses/csg1...
>Think! About Your Work
>Critically Analyze What You Read and Hear
layer8|21 days ago
Kiro|21 days ago
pavel_lishin|21 days ago
adamddev1|21 days ago
I may be an odd one, but I'm refusing to use agents and just happily coding almost everything myself. I only ask an LLM occasional questions about libraries etc., or to write the occasional function. Are there others like me out there?
geetee|21 days ago
incomingpain|21 days ago
I review all code it ever pushes, primarily because on occasion it nukes thousands of lines for no reason.
When I read the 'what actually helped' it's clearly decision fatigue they are managing.
Here's the real trick: you need to change how you're using AI. Have the AI make the decisions. Treat the AI like a new-hire dev that doesn't get to push anything. Then have AI review the changes, have AI build tests around them, have AI do a security review.
idopmstuff|21 days ago
> AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.
The combination of these two facts is why I'm so glad I quit my job a couple of years ago and started my own business. I'm a one-man show and having so much fun using AI as I run things.
Long term, it definitely feels like AI is going to drive company sizes down and lead to a greater prevalence of SMBs, since they get all the benefits with few of the downsides.
xnx|21 days ago
torlok|21 days ago
> It's knowing when to stop.
99% of gamblers stop right before they hit it big.
paufernandez|21 days ago
Some people thrive in more stressful situations, because they don't get as aroused in calmness, but everybody has a threshold velocity at which discomfort starts, higher or lower. AI puts us closer to that threshold, for sure.
cs702|21 days ago
Managing people has always been emotionally and psychologically exhausting.
Managing AI entities can be even more taxing. They're not human beings.
SoftTalker|21 days ago
downboots|21 days ago
jezzamon|21 days ago
On the other side, I feel like using AI tools can reduce the cognitive overload of doing a single task, which can be nice. If you're able to work with a tool that's fast enough and just focus on a single task at a time, it feels like it makes things easier. When you try to parallelize that's when things get messier.
There's a negative for that too - cognitive effort is directly correlated with learning, so it means that your own skills start to feel less sharp too as you do that (as the article mentions)
lvl155|21 days ago
dankobgd|21 days ago
Kiro|21 days ago
PLenz|21 days ago
luxuryballs|21 days ago
psychoslave|21 days ago
nurettin|20 days ago
Coding, not so much.
mrspacejam|21 days ago
orangepanda|21 days ago
Does it matter anymore? Most good engineering principles are to ensure code is easy to read and maintain by humans. When we no longer are the target audience for that, many such decisions are no longer relevant.
mrits|21 days ago
I also don't understand why you assume what the AI generates is more readable by AI than human generated code.
unknown|21 days ago
[deleted]
janwillemb|21 days ago
I agree with the article and recognize the fatigue, but I have never experienced that the industry is "aggressively pretending it does not exist". It feels like a straw man, but maybe you have examples of this happening.
Flundstrom2|20 days ago
It kind of takes its toll somehow; because the solution is "done" once it's fed to the AI, there's no time to recover by churning out boilerplate code or other patterns that sit in the spine.
It's more "full steam ahead, always".
unknown|21 days ago
[deleted]
shevy-java|21 days ago
AI is not good for human health - we have it here.
adamddev1|21 days ago
I may be an odd one, but I'm refusing to use agents and just happily coding everything myself. I only ask an LLM occasional questions about libraries etc. Are there others like me out there?
booleandilemma|21 days ago
I don't outright refuse to use LLM's, but I use them as little as possible. I enjoy the act of programming too much to delegate it to a machine.
For a while now there have been programmers who don't actually enjoy programming and are just in it for the money. This happens because programmer salaries are high and the barrier to entry is relatively low. I can imagine LLMs must feel like a godsend to them.
AnotherGoodName|21 days ago
I keep pushing the AI to do absolutely everything, to a fault: instead of spending 10 minutes manually correcting a mistake the AI made, I spend hours adjusting and rerunning the prompt to correct it.
I’m learning how to prompt well at least.
otabdeveloper4|21 days ago
Prompting isn't a real skill and you're not learning anything.
"Claude 4.5 Sonnet operator" is not a job description.
k__|21 days ago
Employers expect more from each employee, because, well, AI is helping them, right?
sgarland|21 days ago
I’ve noticed this strongly on the database side of things. Your average dev’s understanding of SQL is unfortunately shaky at best (which I find baffling; you can learn 95% of what you need in an afternoon, and probably get by from referencing documentation for the rest), and AI usage has made this 10x worse.
It honestly feels unreasonable and unfair to me. By requesting my validation of your planned schema or query that an AI generated, you’re tacitly admitting that a. You know it’s likely that it has problems b. You don’t understand what it’s written, but you’re requesting a review anyway. This is outsourcing the cognitive load that you should be bearing as a normal part of designing software.
What makes it even worse is MySQL, because LLMs seem to consistently think that it can do things that it can’t (or is at least highly unlikely to choose to), like using multiple indices for a single table access. Also, when pushed on issues like this, I’ve seen them make even more serious errors, like suggesting a large composite index which it claimed could be used for both the left-most prefix and right-most prefix. That’s not how a B{-,+}tree works, my dude, and of all things, I would think AI would have rock-solid understanding of DS&A.
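The left-most-prefix rule mentioned above is easy to demonstrate. Here is a sketch using SQLite's `EXPLAIN QUERY PLAN` (the underlying B-tree principle is the same one MySQL follows; the table and index names are invented for the demo): a composite index can be used to seek on its left-most column, but not on its right-most column alone.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INT, created_at TEXT, total REAL);
    CREATE INDEX idx_cust_date ON orders (customer_id, created_at);
""")

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail); keep the detail text.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Left-most prefix (customer_id): the composite index can be used for a seek...
left = plan("SELECT * FROM orders WHERE customer_id = 1")

# ...but the right-most column alone forces a full scan.
right = plan("SELECT * FROM orders WHERE created_at = '2024-01-01'")

print(left)
print(right)
```

Running this shows a SEARCH using `idx_cust_date` for the first query and a plain SCAN for the second, which is exactly why the "one big composite index serves both prefixes" claim is wrong.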
tossandthrow|21 days ago
With AI, the situations where you know what you are building and can get into flow are fewer and farther between.
So much more time is spent thinking about the domain and the problem to solve.
And that is exhausting.
throwaw12|21 days ago
I like conductor.build, they are doing amazing job, but I don't want to give up my freedom and get heavily reliant on closed source
ithiru|21 days ago
calibas|21 days ago
clejack|21 days ago
Usually there was a cadence to things that allowed for a decent amount of downtime while the machine was running, but I once got to a job where the machine milled the parts so quickly, that I spent more time loading and unloading parts than anything else. Once I started the first part, I didn't actually rest until all of them were done. I ended up straining my back from the repetitive motion. I was shocked because I was in good shape and I wasn't really moving a significant amount.
If I talk about excessive concern for productivity (or profit) being a problem, certain people will roll their eyes. It's hard to separate a message from the various agendas we perceive around us. Regardless of personal feelings, there will always be a negative fallout for people when there's a sudden inversion in workflow like the one described in this article or the one I experienced during my internship.
ionwake|21 days ago
I don't have exhaustion as such, but an increasing sense of dread: the more incredible work I achieve, the less valuable I realise it will potentially be, due to the low cost of the effort.
unknown|21 days ago
[deleted]
iryna_kondr|21 days ago
jbeninger|21 days ago
Instead, I start out broad. "I'm thinking of this feature. Let's talk about what components will go into it and where that code will live... Here are my initial thoughts. Let me know if I get anything wrong..."
This puts it in a conversation mode and gets you on the same page, and best of all it's quick. The llm agent isn't searching files and building, it's just giving general opinions
Once I have a good idea of the architecture, I move on to "write this class; here is an overview of how I would do it", or point the agent to an existing file with the right conventions in place. Once again, if you've done the upfront conversation, this doesn't take long. 20-second turnaround.
Then it's time to refine. "Move this function here, refactor that"
If you're going to have to understand the code anyway, read it while you're writing it. Line by line, function by function, class by class. "Growing" a feature this way keeps you and the llm in sync. It doesn't go off on tangents. And building incrementally keeps you in the loop and in your flow. You don't have 5 things going at once. You have 1, faster.
I've let agents do their thing for a dozen minutes and usually end up having to rewind the whole thing piece by piece to understand what's actually happening
If you're truly vibe coding, maybe you don't have to read the code and can have a dozen agents on a dozen projects. But if you're making Serious Software, I don't see how you can commit code unseen. And in that case, you're not losing anything working incrementally and reading code as it's written?
UltraSane|21 days ago
Ozzie_osman|21 days ago
4er_transform|21 days ago
If “I get exhausted that I have to check in on my coding agent while it does my job” isn’t weak, what is? This has to be satire.
93po|21 days ago
gaigalas|21 days ago
I'm fatigued by this myth.
taway1874|21 days ago
mrcwinn|21 days ago
ashitlerferad|19 days ago
dangus|21 days ago
And even today when it’s useful, it’s really most useful for very specific domains like coding.
It’s not been impressive at all with other applications. Just chat with your local AI chat bot when you call customer service.
For example, I watch a YouTube channel where this guy calls up car dealerships to negotiate car deals and some of them have purchased AI receptionist solutions. They’re essentially worse than a simple “press 1 for sales” menu and have essentially zero business value.
Another example, I switched to a cheap phone plan MVNO that uses AI chat as its first line of defense. All it did was act as a natural language search engine for a small selection of FAQ pages, and to actually do anything you needed to find the right button to get a human.
These two examples of technology were not worth the hype. We can blame those businesses all day long but at the end of the day I can’t imagine those businesses are going to be impressed with the results of the tech long term. Those car dealerships won’t sell more cars because of it, my phone plan won’t avoid customer service interactions because of it.
In theory, these AI systems should easily be able to be plugged in to do some basic operations that actually save these businesses from hiring people.
The cellular provider should be able to have the AI chatbot make real adjustments to your account, even if they’re minor.
The car dealership bot should be able to set the customer up in the CMS by collecting basic contact info, and maybe should be able to send a basic quote on a vehicle stock number before negotiations begin.
But in practice, these AI systems aren’t providing significant value to these businesses. Companies like Taco Bell can’t even replace humans taking food orders despite the language capabilities of AI.
Kiro|21 days ago
nitroedge|21 days ago
stephc_int13|21 days ago
beepbooptheory|21 days ago
I know more than most that there is some baseline productivity we are always trying to be at, which can sometimes be a target more than a current state. But the way people talk about their AI workflows is different. It's like everyone has become tyrannical factory-floor managers, pushing ever further for productivity gains.
Leave this kind of productivity to the bosses, I say! Life is a broader surface than this. We can/should focus on being productive together, but leave your actual life for finer, more sustainable ventures.
aucisson_masque|21 days ago
> When each task takes less time, you don't do fewer tasks. You do more tasks.
And you're also paid more. Find a job that asks less of you if you are fatigued; not everyone wants to sacrifice their personal life for their career. Those are choices you have to make, but AI doesn't inherently force you to become overworked.
taway1874|21 days ago
Unfortunately, with these types of software simpletons making decisions, we are going to see way more push for AI usage and thus higher productivity expectations. They cannot wrap their heads around the fact (for starters) that AI is not deterministic, so it increases the overhead on testing, security, requirements, integrations, etc., making all those productivity gains evaporate. Worse (as the author mentioned), it makes your engineers less creative and more burnt out.
Let's be honest here. Engineers picked this career broadly for 2 reasons, creativity and money. With AI, the creativity aspect is taken away and you are now more of a tester. As for money, those same dumbass decision makers are now going to view this skillset as a commodity and find people who can easily be trained in to "AI Engineers" for way less money to feed inputs.
I am all for technological evolution and welcome it, but this isn't anything like that. It is purely based on profits and shareholders, and anything but building good, proper software systems. Quality be damned. The profession of software development be damned. We will regret it in the future.
rizs12|21 days ago
nextlevelwizard|21 days ago
tartoran|21 days ago
amichayg|21 days ago
Razengan|21 days ago
bicx|21 days ago
giancarlostoro|21 days ago
stuartjohnson12|21 days ago
It is getting very hard to continue viewing HN as a place where I want to come and read content others have written when blog posts written largely with ChatGPT are constantly upvoted to the top.
It's not the co-writing process I have a problem with, it's that ChatGPT can turn a shower thought into a 10 minute essay. This whole post could have been four paragraphs. The introduction was clearly written by an intelligent and skilled human, and then by the second half there's "it's not X, it's Y" reframe slop every second sentence.
The writing is too good to be entirely LLM generated, but the prose is awful enough that I'm confident this was a "paste outline into chatgpt and it generates an essay" workflow.
Frustrating world. I'm lambasting OP, but I want him to write, but actually, and not via a lens that turns every cool thought into marketing sludge.
falloutx|21 days ago
babarock|21 days ago
Code and features still need to experience time and stability in order to achieve maturity. We need to give our end users time to try stuff, to shape their opinions and habits. We need to let everyone on the dev team take the time to update their mental model of the project as patches are merged. Heck, I've seen too many Product Owners incapable of telling you clearly what went in and out of the code over the previous 2 releases, and those are usually a few weeks apart.
Making individual tasks faster should give us more time to think in terms of quality and stability. Instead, people want to add more features more often.
0xbadc0de5|21 days ago
ThomasServo|21 days ago
I write software professionally and remotely for a large, boring insurance company, but I'm building a side project in an area of interest using AI tools to assist, and in a couple of months, at a few hours per week, I've created what would have taken me a year or more. I've read others' comments about having to babysit the AI tools, but that's not so bad.
The little benefit I've noticed using AI tools to "vibecode" is sometimes they come back with solutions that I never would have come up with. ...and then there's the solutions where I click the Undo button and shake my head.
shawnz|21 days ago
jama211|21 days ago
krupan|21 days ago
Dude! You don't have to use it!! Just write code yourself. Do a web search if you are stuck; the information is still out there on Stack Overflow and Reddit. Maybe use Kagi instead of Google, but the old ways still work really well.
nubg|21 days ago
Use your own words!
I'd rather read the prompt!
wesm|21 days ago
nonethewiser|21 days ago
Do you find it works well?
With these agents I've found that making the workflows more complicated has severe diminishing returns. And is outright worse in a lot of cases.
The real productivity boost I've found is giving it useful tools.
mungoman2|21 days ago
zagfh|21 days ago
Then you have to deal with slop, slopfluencer articles written under the influence of AI psychosis, AI addicts, lying managers, lying CEOs etc.
And usually (the author of this article being an exception) you get dumber and are only able to verbalize AI boosterism.
AI only works if you become a slopfluencer, sell a course on YouTube and have people "like and subscribe".
Lockal|21 days ago
What about the article? <Well-known fact> + "nobody talks about" is a fire title, really. People will vote for it even without reading the article.
Also: short sentences. For emphasis. Always three.
Feel the same pain as me? Find me on X or LinkedIn, or join the discussion on Hacker News.
Salgat|21 days ago
scotty79|21 days ago
1970-01-01|21 days ago
Just a few days ago: https://news.ycombinator.com/item?id=46885530
rsrsrs86|21 days ago
66yatman|21 days ago
osigurdson|21 days ago
66yatman|21 days ago
rw_panic0_0|21 days ago
CuriouslyC|21 days ago
coldtea|21 days ago
PaulHoule|21 days ago
quirkot|21 days ago
sph|21 days ago
WadeGrimridge|21 days ago
keybored|21 days ago
> I shipped more code last quarter than any quarter in my career. I also felt more drained than any quarter in my career. These two facts are not unrelated.
I’m gonna be generous (and try not to be pedantic) and assume that more-code means more bugfixes and features (and whatnot) and not more LOC.
Your manager has mandated X tokens a day or you feel you have to use it to keep up. Huh?
> I build AI agent infrastructure for a living. I'm one of the core maintainers of OpenFGA (CNCF Incubating), I built agentic-authz for agent authorization, I built Distill for context deduplication, I shipped MCP servers. I'm not someone who dabbles with AI on the side. I'm deep in it. I build the tools that other engineers use to make AI agents work in production.
Oh.
> If you're an engineer who uses AI daily - for design reviews, code generation, debugging, documentation, architecture decisions - and you've noticed that you're somehow more tired than before AI existed, this post is for you. You're not imagining it. You're not weak. You're experiencing something real that the industry is aggressively pretending doesn't exist. And if someone who builds agent infrastructure full-time can burn out on AI, it can happen to anyone.
This is what ChatGPT writes to me when I ask “but why is that the case”.
1. No, you are not wrong
2. You don’t have <bad character trait>
3. You are experiencing something real
> I want to talk about it honestly. Not the "AI is amazing and here's my workflow" version. The real version.
And it will be unfiltered. Raw. And we will conclude with how to go on with our Flintstone Engineering[2] but with some platitudes about self-care.
> The real skill ... It's knowing when to stop.
Stop prompting? Like, for
> Knowing when the AI output is good enough.
Ah. We do short prompting sessions instead.
> Knowing that your brain is a finite resource and that protecting it is not laziness - it's engineering.
Indeed it’s not this thing. It’s that—thing.
> AI is the most powerful tool I've ever used. It's also the most draining. Both things are true. The engineers who thrive in this era won't be the ones who use AI the most. They'll be the ones who use it the most wisely.
Of course we will keep using “the most powerful tool I’ve ever used”. But we will do it wisely.
What’s to worry about? You can use ChatGPT as your therapist now.
[1] https://news.ycombinator.com/item?id=46935607
[2] https://news.ycombinator.com/item?id=44163821
fHr|21 days ago
unknown|21 days ago
[deleted]