I am becoming more and more convinced that AI can't be used to make something better than what could have been built before AI.
You never needed 1000s of engineers to build software anyway; Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product. And now with AI that might be even harder to avoid. This would mean 1000s of do-everything websites in the future in the best case, or billions of apps doing one thing terribly in the worst case.
The percentage of good, well-planned, consistent, and coherent software is going to approach zero in both cases.
I’m finding that the code LLMs produce is just average. Not great, not terrible. Which makes sense: the model is basically a complex representation of the average of its training data, right? If I want what I consider ‘good code’ I have to steer it.
So I wouldn’t use LLMs to produce significant chunks of code for something I care about. And publishing vibe coded projects under my own GitHub user feels like it devalues my own work, so for now I’m just not publishing vibe coded projects. Maybe I will eventually, under a ‘pen name.’
And Unix was mainly made by two people. It's astounding that, as I get older, even tech managers don't know "The Mythical Man-Month" or how software production generally scales.
> Winamp & VLC were built by less than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product.
Many types of software have essential complexity and minimal feature sets that still require hundreds or thousands of software engineers. Just 4 people is simply not enough man-hours to build the capabilities customers desire.
Think of complex software like 3D materials modeling and simulation, or logistics software for factory and warehouse planning. Even the Linux kernel and userspace have thousands of contributors, and the baseline features (drivers, sandboxing, GUI, etc.) that users want from a modern operating system cannot be done by a 4-person team.
All that said, there are lots of great projects with tiny teams. SQLite is 3 people. Foobar2000 is one person. ShareX, the screenshot tool, is I think 1 developer in Turkey.
“You never needed 1000s of engineers to build software anyway”
What is the point of even mentioning this? We live in reality, and in reality there are countless companies with thousands of engineers behind each piece of software. Outside of reality, yes, you can talk about a million hypothetical situations. Cherry-picking rare examples like Winamp does nothing but provide an exception, which, yes, also exists in the real world.
You need 1000 engineers because you have poor engineering leadership, or no engineering leadership, and engineering is a black hole that management shovels money into where it falls directly onto a huge plane of middle managers who do the best they can with their limited power and understanding. Meanwhile your sales team is writing specifications for the next version of the product, which they already promised to customers, and they hired an outside consultant to transform it into 500 spec documents written in damn near legalese, which will appear one day on the lead engineer's desk with no foreshadowing. It turns out that throwing more engineers at the problem helps here because you'll run out of tasks to assign to all of them and some will roam the halls and accidentally connect distributed knowledge back together.
Completely agree. There is a common misconception in product development that more features = a better product.
I’ve never seen a product/project manager ask themselves: does this feature add any value? Should we remove it?
In agile methodologies we measure the output of the developers. But we don’t care whether that output carries any meaningful value to the end user/business.
What you're pointing at is the trade-off between concentration of understanding and fragmented understanding across more people.
The former is always preferred in the context of product development, but it poses a key-person risk. Apple in its current form is a representation of this: Steve Jobs did enough work to keep the company going for a decade after his death, and now it's sort of lost as to where to go next. But on the flip side, look at its market cap today vs 2000.
Wait, surely adding 10x more agents to my project will speed up development, improve the end product, and make me more productive by that same proportion, right?
I will task a few of them to write a perfectly detailed spec up front, break up the project into actionable chunks, and then manage the other workers into producing, reviewing, and deploying the code. Agents can communicate and cooperate now, and hallucination is a solved problem. What could go wrong?
Meanwhile, I can cook or watch a movie, and occasionally steer them in the right direction. Now I can finally focus on the big picture, instead of getting bogged down by minutiae. My work is so valuable that no AI could ever replace me.
/s
I just built a programming language in a couple of hours, complete with an interpreter, using Claude Code. I know nothing about designing and implementing programming languages: https://github.com/m-o/MoonShot.
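For a sense of scale: the core of a tree-walking interpreter really is tiny. The sketch below is a generic illustration (prefix arithmetic only), not MoonShot's actual code — see the repo for that.

```python
# A tree-walking interpreter in miniature:
# tokenize source, parse it into a nested-list AST, then evaluate the tree.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def tokenize(src):
    # Pad parentheses with spaces so split() yields one token per symbol.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Build a nested-list AST from prefix notation, e.g. (+ 1 (* 2 3))."""
    tok = tokens.pop(0)
    if tok == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return node
    return tok if tok in OPS else float(tok)

def evaluate(node):
    """Walk the AST: a list is an operator application, a float is a literal."""
    if isinstance(node, list):
        op, *args = node
        return OPS[op](*map(evaluate, args))
    return node

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # 7.0
```

A real language adds variables, functions, and error handling on top of this loop, but the parse-then-walk shape stays the same.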
Previously, I'd have an idea, sit on it for a while. In most cases, conclude it's not a good idea worth investing in. If I decided to invest, I'd think of a proper strategy to approach it.
With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
I still need to figure out how to deal with that, for now I just time box these sessions.
But I feel I'm trading thinking time for execution time, and understanding time for testing time. I'm not yet convinced I like those tradeoffs.
Edit: Just a clarification: I currently work in two modes, depending on the project. In some, I use agentic development. In most, I still do it "old school". That's what makes the side effects I'm noticing so surprising. Agentic development pulls me down rabbit holes and makes me lose the plot and focus. Traditional development doesn't; its side effects apparently keep me focused and in control.
That's weird, I'm the opposite. Previously I would start coding immediately, because writing the code helps me figure out the what and the how, and because I'd end up with modular/reusable bits that will be helpful later anyway.
Now I sit on an idea for a long time, writing documentation/specs/requirements because I know that the code generation side of things is automated and effortlessly follows from exhaustive requirements.
>With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
How much of this is because you don't trust the result?
I've found this same pattern in myself, and I think the lack of faith that the output is worth asking others to believe in is why it's a throwaway for me. Just yesterday someone mentioned a project underway in a meeting that I had ostensibly solved six months ago, but I didn't even demo it because I didn't have any real confidence in it.
I do find that's changing for myself. I actually did demo something last week that I 'orchestrated into existence' with these tools. In part because the goal of the demo was to share a vision of a target state rather than the product itself. But also because I'm much more confident in the output. In part because the tools are better, but also because I've started to take a more active role in understanding how it works.
Even if the LLMs come to a standstill in their ability to generate code, I think the practice of software development with them will continue to mature to a point where many (including myself) will start to have more confidence in the products.
If you do not know what you want to build, how to ask the AI for what you want, or what the correct requirements are, then it becomes a waste of time and money.
More importantly, as the problem becomes more complex, it matters more whether you know where the AI falls short.
Case study: Security researchers were having a great time finding vulnerabilities and security holes in Openclaw.
The Openclaw creators had a very limited background in security, and since the AI built Openclaw almost entirely, the authors had to collaborate with security experts to secure the whole project.
with agentic development, I've finally considered doing open source work for no reason aside from a utility existing
before, I would narrow things down to only the most potentially economically viable, and laugh at ideas guys that were married to the one single idea in their life as if it was their only chance, seemingly not realizing they were competing with people that get multiple ideas a day
back to the aforementioned epiphany, it reminds me of the world of Star Trek where everything was developed for its curiosity and utility instead of money
TBH, I have found AI addictive. You use it for the first time, and it's incredible; you get a nice kick of dopamine. That kick of dopamine decreases with every win you get. What once felt incredible is just another prompt today.
Those things don't excite you any more.
Plus, you no longer exercise your brain at work.
Plus, the constant feeling of FOMO.
It deflates you, faster.
This is not a technology problem. AI intensifies work because management turns every efficiency gain into higher output quotas. The solution is labor organization, not better software.
Labor organization yes! I don't quite know how to achieve it. I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.
On a separate note, I have the intensification problem in my personal work as well. I sit down to study, but, first, let me just ask Claude to do some research in the background... Oh, and how is my Cursor doing on the dashboard? Ah, right, studying... Oh, Claude is done...
Why does management turn efficiency gains into higher output quotas? Because competition forces them to. This is a feature of free-market capitalism: a single participant can't decide to keep output as-is when efficiency improves, because it will lose the competition to those that increase output. Labor organization could be the solution if it were global. Labor organizations based in a single country will just lead to work moving to countries without them.
This problem of efficiency gains never translating to more free time is a problem deep in our economic system. If we want a fix, we need to change the whole system.
The driving force is not management or even developers; it's always the end users. They get to do more with less, thanks to the growing output. This is something to be celebrated, not a problem to be "solved" with artificial quotas.
This argument has been used against every new technology since forever.
And the initial gut reaction is to resist by organizing labor.
Companies that succumb to organized labor get locked into that speed of operating. New companies get created that adopt 'the new thing' and blow old companies away.
Repeat.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
This is actually a really good point that I have noticed when using AI for side projects, i.e. on my own time. The allure of thinking: "I wonder how it will perform with this feature request if I give it this amount of info."
Can't say I would put off sleep for it but I get the sentiment for sure.
I'm also coming to the conclusion that LLMs have basically the same value as when I tried them out with GPT-3: good for semantic search / debugging, bad for generation. You constantly have to check it and correct it, and the parts you trust it to get "right" are often those that bite you afterwards - or, if right, introduce gaps in your own knowledge that slowly make you inefficient in your "generation controller" role.
Yes, and I’m convinced AI companies either pay or brainwash these people to put out blog posts like this to spread the idea that it actually works. It doesn’t.
* Made Termux accessible enough for me to use.
* Made an MUD client for Emacs.
* Gotten Emacs and Emacspeak working on Termux.
* Gotten XFCE to run with Orca and AT-SPI communicating to make the desktop environment accessible.
None of this would have happened without AI. Of course, it's only useful for the few people who are blind, use Android, and love Linux and Emacs and such. But it's improved my life a ton. I can do actual work on my phone. I've got Org-mode, calendar, Org-journal, desktop Chromium, etc., all on my phone. And if AI dies tomorrow, I'll still have it. The code is all there for me to learn from, tweak, and update.
I just use one agent, Codex. I don't do the agent swarms yet.
When washing machines were introduced, the number of hours spent on the chore of laundry did not necessarily decrease until some 40 years after their introduction.
When project management software was introduced, it made the task of managing project tasks easier. One could create an order of magnitude more detailed plans in the same amount of time - poorly used, this decreased the odds of project success by eating up everyone's time. And the software itself has not moved the needle on the classic success factors: completing within the planned budget, time, and resources.
40 years ago, when I was a history major in college, one of my brilliant professors gave us a book to read called "The Myth of Domesticity".
In the book, the researcher explains that when washing machines were invented, women faced a whole new expectation of clean clothes all the time, because washing clothes was much less of a labor. And statistics showed that women were actually washing clothes more often, and doing more work, after the washing machine was invented than before.
This happens with any technology. AI is no different.
As someone who prefers to do one task at a time, using AI tools makes me feel productive and unproductive at the same time: productive because I am able to finish my task faster, unproductive because I feel like I am wasting my time while waiting for the AI to respond.
1. New productivity enhancer comes out.
2. Everyone thinks, "This is it! Work will get shorter|easier as a result of this!"
3. Instead the work gets faster|more|higher quality and the (time) volume stays roughly the same.
This applies across (almost?) everything: club newsletters that used to take half a day to write out by hand later took half a day to type and lay out with some clip art. It still takes a year or more to make a movie, but now we get Avatar instead of Ice Pirates -- look it up! :-)
When work reduction does happen, it generally happens at a person-quantum level: people hang on to their role until it goes away entirely, and they (hopefully) find something else to do. Rarely is it the case that someone legitimately reduces to part time incrementally as productivity increases.
AI is similar: no doubt jobs will go away as a result, but much less often will AI result in jobs that are easy/part time. For anyone still in the "work" game, even with AI there's still 5x8 hours of work to do.
I would learn more about air combat from a 12-minute conversation with a jet fighter pilot than I would from a 3-day seminar by air force journalists.
Tell me what the LLM's impact is on your work, provided your work is not writing about AI.
Or, if one wishes for a more explicit noise filter: don't tell me what AI can do. Show me what you shipped with it that isn't about AI.
Computer languages were the lathe for shaping the machines to make them do whatever we want, AI is a CNC. Another abstraction layer for making machines do whatever we want them to do.
Managers don’t even need to push anything. FOMO does all the work.
Overheard a couple of conversations in the office how one IC spent all weekend setting up OpenClaw, another was vibe coding some bullshit application.
I see hundreds of crazy people in our company Slack just posting/reposting twitter hype threads and coming up with ridiculous ideas how to “optimize” workflow with AI.
Once this becomes the baseline, you’ll be seen as the slow one, because you’re not doing 5x work for the same pay.
You do it to yourself, you do, and that's why it really hurts.
> Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.
I like working on my own projects, and where I found AI really shone was by having something there to bounce ideas off and get feedback.
That changes if you get it to write code for you. I tried vibe-coding an entire project once, and while I ended up with a pretty result that got some traction on Reddit, I didn't get any sense of accomplishment at all. It's kinda like doomscrolling in a way, it's hard to stop but it leaves you feeling empty.
People are a gas, and they expand to fill the space they're in. Tools that help people produce more work don't make their lives easier; they just mean an individual needs to do more work using those tools. This is a disposition that most people have, and therefore it's unavoidable. AI is not exciting to me. I only need to use it so I don't fall behind my peers. Why would I ever be interested in that?
level09 | 1 month ago
For me, AI has been less about building more/faster, and more about unlocking potential that was always out of reach: knowledge gaps that would've taken years to fill, new angles I wouldn't have thought to explore on my own. It's not that it makes more software; it just makes you more capable of tackling things you couldn't before.
pousada | 1 month ago
So everything stays exactly the same?
lelanthran | 1 month ago
My experience with LLMs is that they will call any idea a good idea, one feasible enough to pursue!
Their training to be a people-pleaser overrides almost everything else.
darkwater | 1 month ago
> With agentic development, I have an idea, waste a few hours chasing it,
What's the difference between these 2 periods? Weren't you wasting time when sitting on it and thinking about your idea?
MrBuddyCasino | 1 month ago
"This time, it's going to be the correct version of socialism."
co_king_3 | 1 month ago
I think these companies have been manipulating social media sentiment for years in order to cover up their bunk product.
teekert | 1 month ago
Long story short, it was ugly and didn't really work as I wanted. So I'm learning Hugo myself now... The whole experience was kind of frustrating, tbh.
When I finally settled in and did some hours of manual work, I felt much better because of it. I did benefit from my planning with Claude, though...
rvz | 1 month ago
This post should link to the original source as well.
As per the submission guidelines [1]:
”Please submit the original source. If a post reports on something found on another site, submit the latter.”
[0] https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...
[1] https://news.ycombinator.com/newsguidelines.html
maininformer | 1 month ago
I prompt and sit there. Scrolling makes it worse. It's a good mental practice to just stay calm and watch the AI work.