AI adoption and Solow's productivity paradox

792 points | virgildotcodes | 12 days ago | fortune.com

752 comments

crazygringo|12 days ago

Just to be clear, the article is NOT criticizing this. To the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1].

Which is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970s or 1980s despite all the computerization. It wasn't until the mid-to-late 1990s that information technology finally started to show a clear benefit to the economy overall.

The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy.

And so we should expect AI to look the same -- it's helping lots of people, but it's also costing an extraordinary amount of money, and the gains for the people it's helping are currently outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days, and that productivity will rise with time and costs will come down as we learn to integrate it with best practices.

[1] https://en.wikipedia.org/wiki/Productivity_paradox

kace91|12 days ago

The comparison seems flawed in terms of cost.

A Claude subscription is 20 bucks per worker if using personal accounts billed to the company, which is not very far from common office tools like Slack. Onboarding a worker to Claude or ChatGPT is ridiculously easy compared to teaching a 1970s manual office worker to use an early computer.

Larger implementations like automating customer service might be more costly, but I think there are enough supposed short-term benefits that something should be showing up there.

whynotminot|12 days ago

It’s also pretty wild to me how people still don’t really even know how to use it.

On hacker news, a very tech literate place, I see people thinking modern AI models can’t generate working code.

The other day in real life I was talking to a friend of mine about ChatGPT. They didn’t know you needed to turn on “thinking” to get higher quality results. This is a technical person who has worked at Amazon.

You can’t expect revolutionary impact while people are still learning how to even use the thing. We’re so early.

kamaal|12 days ago

One part of the system moving fast doesn't change the speed of the system all that much.

The thing to note is that verifying whether something got done correctly is hard, and it takes time in the same ballpark as doing the work itself.

If people are serious about AI productivity, let's start by addressing how we can verify program correctness quickly. Everything else is just a Ferrari between two red lights.

__jf__|12 days ago

Paul Strassmann wrote a book in 1990 called "Business Value of Computers" showing that it matters where money spent on computers goes. Only firms that spent it on their core business processes showed increased revenues, whereas the ones that spent it on peripheral business processes didn't.

tabs_or_spaces|12 days ago

My experience has been

* If I don't know how to do something, llms can get me started really fast. Basically it distills the time taken to research something to a small amount.

* If I know something well, I find myself trying to guide the llm to make the best decisions. I haven't reached the state of completely letting go and trusting the llm yet, because the llm doesn't make good long-term decisions

* When working alone, I see the biggest productivity boost from AI; that's where I can really get things done.

* When working in a team, llms are not useful at all and can sometimes be a bottleneck. Not everyone uses llms the same way. Sharing context as a team is way harder than it should be. People don't want to collaborate. People can't communicate properly.

* So for me, solo engineers or really small teams benefit the most from llms. Larger teams and organizations will struggle because there's simply too much human overhead to overcome. This matches what I'm seeing in posts these days.

TimByte|12 days ago

I suspect the real breakthrough for teams won't be better raw models, but better ways to make the "AI-assisted thinking" legible and shareable across the group, instead of trapped in personal prompt histories

aurareturn|12 days ago

The future of work is fewer human team members and way more AI assistants.

I think companies will need fewer engineers but there will be more companies.

Now: 100 companies who employ 1,000 engineers each

What we are transitioning to: 10,000 companies who employ 10 engineers each

What will happen in the future: 100,000 companies who employ 1 engineer each

Same number of engineers.

We are about to enter an era of explosive software production, not from big tech but from small companies. I don't think this will only apply to the software industry. I expect this to apply to every industry.

stephenr|11 days ago

> llms can get me started really fast. Basically it distills the time taken to research something

> the llm doesn't make good long term decisions

What could possibly go wrong, using something you know makes bad decisions, as the basis of your learning something new.

It's like a dietician telling a client who asks how to cook the recommended meals to go watch the staff at McDonald's.

nutjob2|11 days ago

To me the biggest benefit of LLMs has always been as a learning tool, be it for general queries or "build this so I can get an idea of how it works and get started quickly". There are so many little things that you need to know when trying anything new.

Herring|12 days ago

My compsci brain suggests large orgs are a distributed system running on faulty hardware (humans) with high network latency (communication). The individual people (CPUs) are plenty fast, we just waste time in meetings, or waiting for approval, or a lot of tasks can't be parallelized, etc. Before upgrading, you need to know if you're I/O Bound vs CPU Bound.

al_borland|12 days ago

When my company first started pushing for devs to use AI, the most senior guy on my team was pretty vocal about coding not being the bottleneck that slowed down work. It was an I/O issue, and maybe a caching issue as well from too many projects going at the same time with no focus… which also makes the I/O issues worse.

kjellsbells|12 days ago

Maybe experienced people are the L2 cache? And the challenge is to keep the cache fresh and not too deep. You want institutional memory available quickly (cache hit) to help with whatever your CPU people need at that instant. If you don't have a cache, you can still solve the problem, but oof, is it gonna take you a long time. OTOH, if you get bad data in the cache, that is not good, as everyone is going to be picking that out of the cache instead of really figuring out what to do.

notepad0x90|12 days ago

In my opinion, you're very wrong. There is typically lots of good communication -- one way. The stuff that doesn't get communicated down to worker bees is intentional. "CPUs" aren't all that fast either, unless you make them fast by providing incentives. If you're a well-paid worker who likes their job, I can see why you would think that, but most people aren't that.

Meetings are work, as much as IPC and network calls are work. Just because they're not fun, or not what you like to do, it doesn't mean they're any less work.

I think you're analyzing things from a tactical perspective, without considering strategic considerations. For example, have you considered that it might not be desirable for CPUs to be just fast, or fast at all? Is CISC faster than RISC? Different architectural considerations follow from different strategic goals, right?

If you're an order picker at an Amazon warehouse, raw speed is important: being able to execute a simpler and more fixed set of instructions (RISC), and at greater speed, is more desirable. If you're an IT worker, less so. IT is generally a cost center, except for companies that sell IT services or software. If you're in a cost center, then you exist for non-profit-related strategic reasons, such as helping the rest of the company work efficiently, be resilient, compete, and be secure. Some people exist in case they're needed some day, others are needed critically but not frequently, yet others are needed frequently but not critically. Being able to execute complex and critical tasks reliably and in short order is more desirable for some workers. Being fast in a human context also means being easily bored, or it could mean lots of bullshit work needs to be invented to keep the person busy and happy.

I'd suggest taking that compsci approach but considering not just the varying tasks and workloads, but also the diversity of goals and use cases of the users (decision makers/managers in companies). There are deeper topics with regard to strategy and decision making surrounding the state machines of incentives and punishments, and decision-maker organization (hierarchical, flat, hub-and-spoke, full-mesh, etc.).

amrocha|12 days ago

Then where are all the amazing open source programs written by individuals by themselves? Where are all the small businesses supposedly assisted by AI?

chrismarlow9|12 days ago

The slow part as a senior engineer has never been actually writing the code. It has been:

- reviews for code

- asking stakeholders opinions

- SDLC latency (things taking forever to test)

- tickets

- documentation/diagrams

- presentations

Many of these require review. The review hell doesn't magically stop at open source projects. These things happen internally too.

bunderbunder|11 days ago

And, hear me out here - perhaps for the sake of morale it makes sense to leave on people's plates a smidge of the part of the job that actually attracts them to this profession in the first place. Otherwise we may find that, after the novelty wears off, we're left with a net productivity dropoff because there's not as much left to keep people motivated to do a good job of the remaining work.

yubblegum|11 days ago

I've sat in a room with a too-big-to-fail banker's VP happily telling me and my boss that "we're getting rid of this whole floor".

Dateline: ~2010. Location: NYC. Why: Indian outsourcing shops.

Now the zinger, dear hn, is this: He actually said to us (we ran a more boutique consulting firm) that "everything has to be done 3 times" and "their work is crap". But "we're getting rid of this floor".

That, imho, was due to geopolitical machinations of inducing India to become part of the West. The immediate equation of "money for quality work" wasn't working, but our higher-ups had grander plans, and sacrificing and gutting the IT industry in the US was not a problem.

So, given the incentives these days, do not remotely pin your hopes on what these CEOs are saying. It means nothing whatsoever.

franktankbank|11 days ago

And India buys Russian oil, so what exactly is the goal there?

bubblewand|12 days ago

My company’s behind the curve, just got nudged today that I should make sure my AI use numbers aren’t low enough to stand out or I may have a bad time. Reckon we’re minimum six months from “oh whoops that was a waste of money”, maybe even a year. (Unless the AI market very publicly crashes first)

bazmattaz|12 days ago

My manager mentioned that his manager (an executive) is not happy because the org we are in is not using as many tokens as other orgs in the company. Pretty wild.

mr_toad|12 days ago

So management basically have no clue and want you to figure out how to use AI?

Do they also make you write your own performance review and set your own objectives?

sebmellen|12 days ago

The thing with a lot of white collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger percentage of the overall time consumed from thought to completion.

Other white collar business/bullshit job (ala Graeber) work is meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate those thoughts, thinking about market positioning, etc.

Maybe tools like Cowork can help to find files, identify tickets, pull in information, write Excel formulas, etc.

What’s different about coding is no one actually cares about code as output from a business standpoint. The code is the end destination for decided business processes. I think, for that reason, that code is uniquely well adapted to LLM takeover.

But I’m not so sure about other white-collar jobs. If anything, AI tooling just makes everyone move faster. But an LLM automating a new feature release and drafting a press release and hopping on a sales call to sell the product is (IMO) further off than turning a detailed prompt into a fully functional codebase autonomously.

LPisGood|12 days ago

I’m confused what kind of software engineer jobs there are that don’t involve meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate that, thinking about market positioning, etc?

If you weren't doing much of that before, I struggle to think of how you were doing much engineering at all, save some niche, extremely technical roles where many of those questions were already answered. But even then, I would expect you're having those kinds of discussions, just more efficiently and with other engineers.

lich_king|12 days ago

> making slides/decks to communicate those thoughts,

That use case is definitely delegated to LLMs by many people. That said, I don't think it translates into linear productivity gains. Most white collar work isn't so fast-paced that if you save an hour making slides, you're going to reap some big productivity benefit. What are you going to do, make five more decks about the same thing? Respond to every email twice? Or just pat yourself on the back and browse Reddit for a while?

It doesn't help that these LLM-generated slides probably contain inaccuracies or other weirdness that someone else will need to fix down the line, so your gains are another person's loss.

jurschreuder|12 days ago

Workers may see the LLM as a productivity boost because they can basically cheat at their homework.

As a CEO I see it as a massive clog of content that somebody will need to check. A DDoS of any text-based system.

The other day I got a 155-page document on WhatsApp. Thanks. Same with pull requests. Who will check all this?

beart|12 days ago

> Who will check all this?

The answer to that, for some, is more AI.

I had a peer explain that the PRs created by AI are now too large and difficult to understand. They were concerned that bugs would crop up after merging the code. Their solution was to use another AI to review the code... However, this did not solve the problem of not knowing what the code does. They had a solution for that as well... ask AI to prepare a quiz and then deliver it to the engineer to check their understanding of the code.

The question was asked - does using AI mean best-practices should no longer be followed? There were some in the conversation who answered, "probably yes".

> Who will check all this?

So yeah, I think the real answer to that is... no one.

thisoneisreal|12 days ago

Just yesterday one of my junior devs got an 800-line code review from an AI agent. It wasn't all bad, but is this kid literally going to have to read an essay every time he submits code?

franktankbank|11 days ago

Who gave you the 155 page doc? How quickly were they fired?

K0balt|11 days ago

This is because the vast majority of white collar activity in a large corporation produces no direct economic value.

Making it easier/better just means more/higher quality “worthless” work is performed. The incentives in the not-directly-productive parts of organizations are to keep busy and maintain a stream of signals of productivity. For this, AI just raises the bar. The 25% of the work that -is- important to producing economic value just gets reduced to 15%.

The workforce in large orgs that is most AI adjacent is already idling along in terms of production of direct economic value. Making them 10x more productive in nonproductive work will not impact critical metrics in a short timeframe.

It’s worth noting that these “not directly productive” activities actually can (and often do) produce value, eventually. Things like brand identity, culture, meta-innovation, and vision (search-space) are intangibles that present as cost centers but can prove invaluable on longer timescales if done right.

gjk3|11 days ago

Principal-agent problem.

The manager wants a large team. The shareholder who ultimately employs the manager but does not control operations does not want that of course.

Hmm.

ddtaylor|11 days ago

There are a lot of people who sit with their laptop open while streaming something, sleeping or messing with their phone while periodically waking up to join a new meeting or fiddle with something to make it look like they are active.

These are the people "shocked" when they are displaced.

n_u|12 days ago

Original paper https://www.nber.org/system/files/working_papers/w34836/w348...

Figure A6 on page 45: Current and expected AI adoption by industry

Figure A11 on page 51: Realised and expected impacts of AI on employment by industry

Figure A12 on page 52: Realised and expected impacts of AI on productivity by industry

These seem to roughly line up with my expectation that the more customer-facing or physical-product-oriented your industry is, the lower the usage and impact of AI (construction, retail).

A little bit surprising is "Accom & Food" being 4th highest for productivity impact in A12. I wonder how they are using it.

CSSer|11 days ago

The figure right after A6 is pretty striking. Ask people if they expect to use AI and a vast majority say yes. Ask if they expect to use AI for specific applications and no more than a third say yes in any industry. That should be telling imo. What we have is a tool that looks impressive to any non-SME for a lot of applications. I would caution against the idea that the benefits are obvious.

J_Shelby_J|12 days ago

It’s simple calculus for business leaders: admit they’re laying off workers because the fundamentals are bad and spook investors, admit they’re laying off workers because the economy is bad and anger the administration, or just say it’s AI making roles unnecessary and hope for the best.

Ithildin|11 days ago

If I do something faster by pairing with AI, why should my employer reap the benefit? Why would I pass the savings on to my employer?

Could it be that employers are not seeing the difference because most employees are doing something else with the time they've saved by using AI?

There's been massive wage stagnation, benefits are crap, they play games with PTO. Most people I talk to who use AI as a part of their workflow are taking advantage of something nice that has come their way for a change.

DaedalusII|12 days ago

If you include Microsoft Copilot trials in Fortune 500s, absolutely. A lot of major listed companies are still oblivious to the functionality of AI; their senior management doesn't even use it, out of laziness.

bccdee|12 days ago

There's a lot of rote work in software development that's well-suited to LLM automation, but I think a lot of us overestimate the actual usefulness of a chatbot to the average white-collar worker. What's the point of making Copilot compose an email when your prompt would be longer than the email itself? You can tell ChatGPT to make you a slide deck, but slide decks are already super simple to make. You can use an LLM as a search engine, but we already have search engines. People sometimes talk about using a chatbot to brainstorm, but that seems redundant when you could simply think, free from the burden of explaining yourself to a chatbot.

LLMs are impressive and flexible tools, but people expect them to be transformative, and they're only transformative in narrow ways. The places they shine are quite low-level: transcription, translation, image recognition, search, solving clearly specified problems using well-known APIs, etc. There's value in these, but I'm not seeing the sort of universal accelerant that some people are anticipating.

bandrami|12 days ago

That's probably true for some, but I think a lot of big orgs are simply risk-averse and see AI in general as a giant risk that isn't even fully baked enough to quantify yet. The security and confidentiality issues alone will make Operations hesitant, and Legal probably has some questions about IP (both the risk of a model outputting patented or otherwise protected code, and the huge legal gray area that is the copyrightability of the output of an LLM).

Give it a year or two and let things settle down and (assuming the music is still playing at that time) you might see more dinosaurs start to wander this way.

jeron|12 days ago

it turns out it's really hard to get a man to fish with a pole when you don't teach them how to use the reel

crispyambulance|11 days ago

I accept that AI-mediated productivity might not be what we expect it to be.

But really, are CEOs the best people to assess productivity? What do they _actually_ use to measure it? Annual reviews? GTFO. Perhaps more importantly, it's not like anything a C-level says can ever be taken at face value when it involves their own business.

NoLinkToMe|11 days ago

The latest company I worked in had your typical fee-earners and fee-burners categories of employees.

The fee-earners had KPIs tied to the sales pipeline, from leads to contracts to work completed on fixed contracts or hours billed on variable-rate contracts. It's relatively easy to measure improvements here. Though it's harder to distill the causes of that and tie it to LLMs.

The fee-burners, like in IT, legal, compliance, marketing, and finance, typically had KPIs tied to department objectives. This stuff is a LOT more subjective and a lot more prone to manipulation (Goodhart's law). But if you spend 60 hours a week on work in such a department, you tend to have a pretty good idea of whether things are speeding up or not. In a department I was involved in there was a lot of KYC that involved reviewing 300+ pages per case; we tracked case workload per person per day, as well as success rates (percentage of case reviews completed correctly), and could see meaningful changes one could attribute to LLM use.

Agreed though that I'm more interested in a few case studies in detail to understand how they actually measured productivity.

gjk3|11 days ago

Most CEOs of large firms aren't all that involved in the details, so there's no way they can have a true and proper view of the day-to-day operations on the ground level.

Steve Jobs is the only CEO of a large firm that I can recall who always remained intimately involved.

steveBK123|11 days ago

I think we are entering the phase where corporate is expecting more ROI than they are getting, but want to remain in the arms race.

The firmwide AI guru at my shop who sends out weekly usage metrics and release notes started mentioning cost only in the last few weeks. At first it was just about engaging with individual business heads on setting budgets / rules and slowing the cost growth rate.

A few weeks later and he is mentioning automated cost reporting, model downgrading, and circuit breaking at a per-user level. The daily spend at which you immediately get locked out within 24 hours is pretty low.

dranudin|11 days ago

I noticed something similar at my work. The CEO is hyping AI, but at the same time free access to the big models was taken away and rate limits seem to be much tighter.

lukaslalinsky|12 days ago

There was a recent post where someone said AI allows them to start and finish projects. And I find that exactly true. AI agents are helpful for starting proofs of concept, and for doing finishing fixes to an established codebase. For a lot of the work in the middle, it can still be useful, but the developer is more important there.

cadamsdotcom|12 days ago

Many people are using AI as a slot machine, rerolling repeatedly until they get the result they want.

Once the tools help the AI to get feedback on what its first attempt got right and wrong, then we will see the benefits.

And the models people use en masse - eg. free tier ChatGPT - need to get to some threshold of capability where they’re able to do really well on the tasks they don’t do well enough on today.

There’s a tipping point there where models don’t create more work after they’re used for a task, but we aren’t there yet.

saezbaldo|11 days ago

One underexplored reason: companies can't give AI agents real authority. The moment an agent needs to do anything beyond summarizing text — update a CRM, transfer funds, modify infrastructure — the security question kills it. No one wants an agent that can take irreversible actions with no approval chain. Until the trust architecture problem is solved, AI stays in read-only mode for most enterprises.

plaidfuji|11 days ago

This is the biggest bottleneck. To realize the “replacement of white collar workers” fever dream, (which is, I still believe, technically feasible), you need the agent that replaces them to have all of the context they had. All of the emails, all of the Slacks, all of the meeting minutes, access to private corporate systems and files, etc. I can’t think of a single company that would want to turn all of that over to OpenAI.

vagrantstreet|12 days ago

Perhaps something went wrong along the career path of a developer? Personally, during my education there was a severe lack of actual coding done mid-lecture, especially any sort of showcase of the tools that are available. We didn't even get taught how to use debuggers, and I see late-year students still struggling with basic navigation in a terminal.

And the biggest irony is that the "scariest" projects we had at our university ended up being maybe 500-1000 lines of code. Things really must go back to hands-on programming with real-time feedback from a teacher. LLMs only output what you ask for and won't really suggest concepts used by professionals unless you go out of your way to ask for them, so it all seems like a vicious cycle, even though meaningful code blocks can range from 5 to 100 lines. When I use LLMs I just get information burnout trying to dig through all that info or code.

littlecranky67|12 days ago

I'm saying it over and over: AI is not killing dev jobs, offshoring is. The AI hype happens to coincide with the end of the pandemic, when lots of companies went work-from-home and are now hiring cheaper devs around the world.

beloch|12 days ago

The article suggests that AI-related productivity gains could follow a J-curve: an initial decline, as happened with IT, followed by an exponential surge. They admit this is heavily dependent on the real value AI provides.

However, there's another factor. The J-curve for IT happened in a different era. No matter when you jumped on the bandwagon, things just kept getting faster, easier, and cheaper. Moore's law was relentless. The exponential growth phase of the J-curve for AI, if there is one, is going to be heavily damped by the enshittification phase of the winning AI companies. They are currently incurring massive debt in order to gain an edge on their competition. Whatever companies are left standing in a couple of years are going to have to raise the funds to service and pay back that debt. The investment required to compete in AI is so massive that cheaper competition may not arise, and a small number of winners (or a single one) could put anyone dependent on AI into a financial bind. Will growth really be exponential if this happens and the benefits aren't clearly worth it?

The best possible outcome may be for the bubble to pop, the current batch of AI companies to go bankrupt, and for AI capability to be built back better and cheaper as computation becomes cheaper.

robinwhg|12 days ago

But there already is cheaper competition? Open models may be behind, but only by ~6 months each generation.

ahepp|12 days ago

I read an article in FT just a couple days ago claiming that increased productivity was becoming visible in economic data

> My own updated analysis suggests a US productivity increase of roughly 2.7 per cent for 2025. This is a near doubling from the sluggish 1.4 per cent annual average that characterised the past decade.

good for 3 clicks: https://giftarticle.ft.com/giftarticle/actions/redeem/97861f...

RegW|11 days ago

Productivity is a moveable feast and tricky to compare with the past. The productivity businesses talk about is the ratio of cost to profit.

As tech becomes available to help reduce your costs and drive up your profit, the same tech also reduces your competitors' costs and perhaps lets more competitors into the market. This drives down your product prices and reduces your profit.

So you invest but see no increase in productivity, but if you don't do it - you're toast.

metalman|11 days ago

As a small bespoke manufacturer of things made out of metal, I have recently begun implementing a policy of abandoning most online services, including banking (well, almost: customers can still send me money online, but apart from monthly reports I have to go to a branch to see or get funds). It is awesome. The web brings me customers via two websites and AI-assisted searches, but the whole thing is asymmetrical, as it has been more than a year since my last online purchase or form-filling or application; it's all done on paper, in person, or I live without whatever it is. The result is a work environment that is focused on customers and production, and external obligations and requirements are literal, as they must be managed efficiently in person and in such a way as to be finished or stable. None of the death-by-1000-emails brain rot. There is the mental state of having zero knowledge of what is happening on a millisecond-by-millisecond basis and letting everything go, and lo, the world grinds on just fine without me, and I get a few things done. Mr Solow called it long ago, and my intuition has always been that the busy work was shit; I have now proven that in my one specific circumstance.

des429|11 days ago

This title is so clickbait-y and misleading compared to what the actual article is about that it's tough not to feel disappointed this is on the front page. @dang

dang|11 days ago

Belatedly fixed. Thanks!

p.s. @dang doesn't work reliably - hn@ycombinator.com is the way to get a message delivered

concats|12 days ago

If we assume people are somewhat rational (big ask, I know) and the efficient-market hypothesis holds, then we can estimate the value created by AI to be roughly equal to the revenue of these AI companies. That is: a professional who pays 20€/month likely believes that the AI product provides them with roughly 20€ each month in productivity gains, or else they wouldn't be paying, and similarly they would pay more for a bigger subscription if they thought there was more low-hanging fruit available to grab.

Of course this doesn't take into account people who just pay to play around and learn, non professional use cases, or a few other things, but it's a rough ballpark estimate.

Assuming the above, current AI models would only increase the productivity for most workplaces by a relatively small amount, around 10-200 € per employee per month perhaps. Almost indistinguishable compared to salaries and other business expenses.

Ukv|11 days ago

> A professional who pays 20€/month likely believes that the AI product provides them with roughly 20€ each month in productivity gains, or else [...] they would pay more for a bigger subscription

Unless I'm misunderstanding, shouldn't someone rational want to pay where (value - cost) is highest, opposed to increasing cost to the point where it equals value (which has diminishing returns)?

A $40 subscription creating $1000 worth of value would be preferred over a $200 subscription creating $1100 of value, for instance, and both preferred over a $1200 subscription creating $1200 of value.

mark_l_watson|12 days ago

I think the best point made in this conversation is that AI is often enough used to do things quickly that have little value, or just waste people’s time.

I am glad to see articles like this that evaluate impact, but I wish the following would get more public interest:

With LLMs we are chasing sort-of linear growth in capability at exponential cost increases for power and compute.

Were you mad when the government bailed out mismanaged banks? The mother of all government bailouts might be using the US taxpayer to fund idiot companies like Anthropic and OpenAI that are spending $1000 in costs to earn $100.

I am starting to feel like the entire industry is lazy: we need fundamental new research in energy- and compute-efficient AI. I do love seeing non-LLM research efforts and more being done with much smaller task-focused models, but the overall approach we are taking in the USA is f$cking crazy. I fear we are going to lose big-time on this one.

SKILNER|11 days ago

If you've ever undertaken the task of documenting entire workflows, then you know that you quickly put up the white flag at the word "entire".

When you actually talk to people about what they do there are often many, many nuances, micro-events, micro-decisions and micro-actions in their work. This is why it can take days/weeks/months to completely train a new person for a job.

This level of detail is barely documented - anywhere. There is a huge amount of information buried in workflows that AI has barely had access to for training. A lot of this is more in the realm of world models, rather than LLMs.

So imagine trying to use AI to improve these workflows it knows so little about. Then imagine AI trying to reinvent them across an organization.

We find these use cases where AI provides great value - totally true - but these barely scratch the surface of what goes on.

EastLondonCoder|12 days ago

I think the deluge of projects on Show HN points to something real: it's possible today to ship projects as a one-man shop that look like something that just a year or so ago would have required a team.

Personally I have noticed strange effects: where I previously would have reached for a software package to make something or solve an issue, it's now often faster for me to write a specific program just for my use case. Just this weekend I needed a reel with a specific look to post on Instagram, but instead of trying to use something like After Effects, I could quickly cobble together a program that used CSS transforms and outputted a series of images I could tie together with ffmpeg.
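
The last step is just stitching the numbered frames into a video. A minimal sketch of that step, assuming a directory of pre-rendered PNGs; the paths and frame rate here are made up:

    # Stitch pre-rendered frames (frame_0000.png, frame_0001.png, ...) into an mp4.
    import subprocess

    subprocess.run([
        "ffmpeg", "-y",
        "-framerate", "30",
        "-i", "frames/frame_%04d.png",
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "reel.mp4",
    ], check=True)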

About a month ago I was unhappy with the commercial ticketing systems; they were both expensive and opaque, so I made my own. Obviously for a case like that you need discipline and testing when you take people's money, so there was a lot of focus on end-to-end testing.

I have a few more examples like this, but to make this work you need to approach using LLMs with a certain amount of rigour. The hardest part is to prevent drift in the model. There are a certain number of things you can do to keep the model grounded in reality.

When the tool doesn't have a reproducer, it'll happily invent a story and you'll debug the story. If you ground the root cause in, for example, a test, the model gets enough context to actually solve the problem.
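
To make that concrete, here is roughly the shape of it (the module and function are made up for illustration): write the failing reproducer first, then point the model at it.

    # Hypothetical reproducer: pin the bug down as a failing test first,
    # so the agent has a concrete feedback loop instead of a story to debug.
    from billing import prorate  # illustrative module, not a real dependency

    def test_prorate_handles_partial_month():
        # Observed bug: 15 days of a 30-day plan should cost half the monthly price.
        assert prorate(monthly_price=30.0, days_used=15, days_in_month=30) == 15.0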

Another issue is that you need to read and understand code quickly, but it's not really different from working with other developers. When tests are passing I usually open a PR to myself and then review it as I normally would.

A prerequisite is that you need tight specs, but those can also be generated if you are experienced enough. You need enough domain intuition to know what ‘done’ means and what to measure.

Personally I think the bottleneck will shift from trying to get into a flow state to write solutions, to analyzing the problem space and verification.

lm28469|12 days ago

> I think the deluge of projects on Show HN points to something real: it's possible today to ship projects as a one-man shop that look like something that just a year or so ago would have required a team.

Lots of these projects have a lifespan of a week and will never ever be maintained. When you pour blood and sweat into a project you get attached to it; when you vibe code it in an afternoon and it's not an instant hit, you move on to the next one.

1broseidon|12 days ago

I think the 'AI productivity gap' is mostly a state management problem. Even with great models, you burn so much time just manually syncing context between different agents or chat sessions.

Until the handoff tax is lower than the cost of just doing it yourself, the ROI isn't going to be there for most engineering workflows.

flurdy|11 days ago

It's the same reason why I have, for more than a decade, been so frustrated with people refusing to consider proper pair programming and even mob programming, as they view the need to keep people busy churning lines of code individually as the most important part of the company.

That multiple AI agents can now churn out those lines nearly instantly, and yet project velocity does not go much faster, should start to make people aware that code generation is not actually the crucial cost in the time taken to deliver software and projects.

I ranted recently that small mob teams with AI agents may be my view of ideal team setup: https://blog.flurdy.com/2026/02/mob-together-when-ai-joins-t...

lonelyasacloud|11 days ago

> ... and yet project velocity does not go much faster

1) The models, like us, have finite context windows and intelligence; even with good engineering practices, system complexity will eventually slow them down.

2) At the moment at least, the code still needs to be reviewed and signed off by us and reading someone else's code is usually harder than writing it.

alexwennerberg|12 days ago

Large firms are extremely bureaucratic organizations largely isolated from the market by their monopolistic positions. Internal pressures rule over external ones, and thus, inefficiency abounds. AI undeniably is a productive tool, but large companies aren't really primarily concerned with productivity.

vjk800|12 days ago

Indeed. Most large companies don't need AI to increase productivity - they just need to stop wasting time on stupid bullshit. However, figuring out what is stupid bullshit and what is not seems to be an impossible task, and I don't think AI is going to help here at all.

rybosworld|11 days ago

Companies over a certain size (say 25+ employees) are universally bad at:

- measuring productivity

- adapting to change

This article just reinforces that. Past a certain headcount, executives have little to no understanding of what an IC's day-to-day is like.

AI tooling doesn't fix the bureaucracy the c-suite helped to create.

zubiaur|11 days ago

And there are incentives to misreport.

My team has gained a reputation as some sort of firefighting crew.

We are being called by PMs when projects are failing, usually engineering-data and engineering-adjacent stuff. (Mechanical/Electrical).

We automate the heck out of the processes, using a mix of AI processing, RAG, and AI-assisted coding.

We rescue the projects. Finish ahead of schedule. Make fewer mistakes. We gain additional scope. We win new projects. We bring new clients.

But when higher-ups ask the people we helped about productivity gains, the most generous will say stuff like "it takes as long to review as it takes to do things manually" or "they really helped on {inconsequential part of the deliverable}".

If that is the takeaway the higher-ups walk away with, they would be incredibly misled. Luckily for me, I have people who deal with the politics, while my team can focus on delivery.

Our reputation keeps growing, and we keep delivering faster. The heads of the departments we work with love us, the middle rank who were doing the laborious crap, maybe not so much.

cmiles8|12 days ago

I like AI and use it daily, but this bubble can’t pop soon enough so we can all return to normally scheduled programming.

CEOs are now on the downside of the hype curve.

They went from “Get me some of that AI!” after first hearing about it, to “Why are we not seeing any savings? Shut this boondoggle down!” now that we’re a few years into bubble, the business math isn’t working, and they only see burning piles of cash.

sowbug|12 days ago

"return to normally scheduled programming" is probably not the exact phrasing you want to use. :)

bob1029|11 days ago

The analogy that an LLM is simply an amplifier is apt for most general business.

If you've already got a very effective team with clear vision/goals, this technology will almost certainly help to some degree.

If you've got a sinking ship of a business, this technology will likely drag you down faster.

You always have to work backward from the customer into the technology. AI will never change that. I've found myself waffling on advice to some clients regarding AI because whether or not they can effectively leverage it depends more on what the people in the business are willing to do than what the technology can do.

cowpig|11 days ago

NVIDIA is doing circular finance deals with all of the top labs to pump up demand for its products and charging a monopoly rate on those products. Everything in computing is costing more.

Access to capital for everyone else is dropping. And the US economy is being managed by chaos monkeys, causing all kinds of supply chain disruptions. Oligopolies in almost every market are increasingly jacking up prices above market equilibrium rates as they are emboldened by a corrupted FTC.

Despite what Peter Thiel may have led you to believe, monopolies are not healthy for an economy in aggregate.

Of course the economy is slowing.

spants|11 days ago

Maybe the CEOs do not realise that their workers are achieving great productivity, completing their tasks in 1 hour instead of 8, and spending time on the beach, rather than at their desks?

tehjoker|12 days ago

There is probably a threshold effect above which the technology begins to be very useful for production (other than faking school assignments, one-off-scripts, spam, language translation, and political propaganda), but I guess we're not there yet. I'm not counting out the possibility of researchers finding a way to add long term memory or stronger reasoning abilities, which would change the game in a very disorienting way, but that would likely mean a change of architecture or a very capable hybrid tool.

DaedalusII|12 days ago

The greatest step change will be when mainstream businesses realise they can use AI to accurately fill in PDF documents with information in any format.

Filling in PDF documents is effectively the job of millions of people around the world.

hestefisk|12 days ago

I am in strategy consulting and I can tell you the productivity gains are real in terms of research, model building, and summarising work. The result is price pressure from our clients.

pengaru|12 days ago

At $dayjob GenAI has been shoved into every workflow and it's a constant source of noise and irritation, slop galore. I'm so close to walking away from the industry to resume being a mechanic, what a complete shit show.

bitwize|12 days ago

Meanwhile in some auto shop,

"Perfect! Let's delve into the problem with the engine. Based on the symptoms you describe, the likely cause is a blown head gasket..."

matt3210|12 days ago

I’m not sure about this. I’ve been 100% AI since Jan 1 and I’m way more productive at producing code.

The non-code parts (about 90% of the work) are taking the same amount of time though.

vjk800|12 days ago

My experience has been that AI is much more useful on my own systems than on company systems. For AI to (currently) be useful, I need to choose my own tooling and LLM models to support AI centered workflow. At work, I have to use whatever (usually Microsoft) tools my company has chosen to purchase and approve for my corporate computer, and usually nothing works as well as on my own machine where I get to set it up as I want.

sigmoid10|12 days ago

What you are describing is a failure to integrate AI into said company systems. I have seen quite a few companies now that buy MS AI products with great hopes only to be severely disappointed, because they may as well have just used vanilla ChatGPT (in fact then they would at least get newer models faster). But there are counterexamples too. If you can pull all your company documentation into a vector db and build a RAG-based assistant, you can potentially save countless hours across your workforce and possibly customers too. But this is not easy and also requires some level of UI interactivity that no one really offers right now. In fact they can't offer it, because you usually need to integrate ancient, arcane sources into your system. So you do have to write a lot of integration code yourself at every step. Not many companies are willing to spend that kind of money and effort, because managers just want to buy a MS product and be done with improving efficiency by next quarter.
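
The core retrieval loop of such an assistant is small; the hard part is all the integration around it. A minimal sketch, with the document source and the LLM call stubbed out as placeholders:

    # Toy RAG loop: embed doc chunks, retrieve the closest ones, stuff them into a prompt.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    docs = load_chunked_company_docs()  # placeholder: wikis, PDFs, tickets, etc.
    doc_vecs = model.encode(docs, normalize_embeddings=True)

    def answer(question, k=5):
        q = model.encode([question], normalize_embeddings=True)[0]
        top = np.argsort(doc_vecs @ q)[::-1][:k]  # cosine similarity via dot product
        context = "\n\n".join(docs[i] for i in top)
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return call_llm(prompt)  # placeholder: whatever model/endpoint is approved internally

In practice you would swap the in-memory arrays for a real vector db, and spend most of your time on the "ancient, arcane sources" part, which is exactly the integration code described above.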

vagab0nd|10 days ago

The "productivity gain" in the world of bits has been pretty much exponential for 100 years. But as they spill over into the world of atoms, they flatten into step changes, because you lose the compounding effect.

I do think we are on the verge of something tho. Once the compounding effect happens in the world of atoms (recursive robotics), it's over.

carefree-bob|12 days ago

It's not just technology; it's very hard to detect the effect of inventions in general on productivity. There was a paper pointing out that the invention of the steam engine was basically invisible in the productivity statistics:

https://www.frbsf.org/wp-content/uploads/crafts.pdf

robinwhg|12 days ago

The first steam engine was invented by a Turk and he used it solely to make kebab spin. Never thought about using it for anything else.

insane_dreamer|11 days ago

It's not that AI is ineffective, but it will take time to create solutions that are actually highly useful in real-world business scenarios.

Quickly slapping "AI features" on a bunch of existing products -- like almost every SW company seems to have done in an effort to appear "on the cutting edge" -- accomplishes almost nothing.

deadbabe|12 days ago

The people who will be most productive with AI will be the entreprompteurs who whip up entire products and go to market faster than ever before, iterating at dangerous speeds. Lean Startup methodology on pure steroids basically.

Unfortunately I think most of the stuff they make will be shit, but they will build it very productively.

boxedemp|12 days ago

Software doesn't need to be good to be successful; it only needs to solve a problem and be better than the competition.

I predict a golden age for experienced developers! There will be an uncountable number of poorly designed apps with scaling issues. And many of them will be funded.

ssvt|9 days ago

Maybe I’m slow (alright, I know I am) but it seems to me that HN has jumped the shark with its apparent shift to “all AI, all the time”, making me lose interest.

Yes, I won’t let the door hit me in the ass on the way out…

qgin|11 days ago

This may mean the centaur era will be shorter than expected. If we take as a given that:

* AI is doing real work

* Humans using AI don't seem to get more done with AI than without

There is a huge economic pressure to remove humans and just let the AI do the work without them as soon as possible.

Havoc|12 days ago

I find this difficult to reconcile with things like, for example, freelance translation being basically wiped out wholesale.

Or even the simple utility of having a chatbot. They’re not popular because they’re useless

Which to me says it's more likely that people underestimate corporate inertia.

DonnyV|11 days ago

I think the reason tech didn't help productivity until the late 90s is pretty obvious: the internet was missing. Computers needed the internet to make them useful to everyone. So the question should be:

What is AI missing that will make it useful to everyone?

rr808|12 days ago

BTW the study was from September 2024 to 2025, so it covers only the very earliest of adopters.

d--b|12 days ago

CEOs have no clue what's going on at the IC level.

I bet many CEOs' PAs are using AI for many tasks. It's typically a role where AI is very useful: answering emails, moving meetings around, booking and buying a bunch of crap.

NVHacker|11 days ago

Isn't it a bit early to draw such conclusions? We are just getting started with AI use, especially in tech/engineering teams, and have only scratched the surface with regard to what is possible.

nowittyusername|12 days ago

As we approach the singularity, things will get noisier and make less and less sense, because rapid change can look like chaos from inside the system. I recommend folks just take a deep breath and take a look around. Regardless of your stance on whether the singularity is real or whether AI will revolutionize everything, forget all that noise. Just look around you and ask yourself whether things seem more or less chaotic. Are you able to predict what is going to happen better or worse than before? How far out can your predictions reach now versus, let's say, 10 or 20 years ago? Conflicting signals are exactly how all of this looks: one account says it's the end of the world, another says nothing ever changes and everything is the same as it always was...

giancarlostoro|11 days ago

I'm not sure how you even measure productivity going up or down; for many of us, LLMs have trimmed down the amount of effort required to scour through Google search results.

yalogin|12 days ago

Every technology, whether it improved existing systems and productivity or not, created new wealth by creating new services and experiences. So that is what needs to happen with this wave as well.

AngryData|12 days ago

I think the biggest problem is calling it AI to start with. It gives people a huge misrepresentation of what it is actually capable of. It is an impressive tool with many uses, but it is not AGI.

kevincloudsec|11 days ago

Most of these companies deployed Microsoft Copilot, watched it hallucinate meeting summaries for six months, and called that an AI strategy. Source: current situation.

h0ek|11 days ago

Yeah, maybe the CEO doesn’t see any impact on productivity — but mine definitely changed. I actually have more time for my own stuff now, because AI quietly handles part of the work for me. Of course, if productivity is measured by how many PowerPoint slides get presented to the board, then sure — nothing changed. Especially when HR reports say “everything looks the same” — because no one is tracking how much work is silently being offloaded to AI. And just to avoid overworking myself, I even asked AI to write this comment so I could focus on something else in the meantime.

izucken|11 days ago

At my current job I am in deep net LOC negative despite all new features... Somebody is getting fired and sued for stealing all these LOCs from the company...

transcriptase|12 days ago

Mentioning AI in an earnings call means fuck all when what they’re actually referring to is toggling on the permissions for borderline useless copilot features across their enterprise 365 deployments or being convinced to buy some tool that’s actually just a wrapper around API calls to a cheap/outdated OpenAI model with a hidden system prompt.

Yeah, if your Fortune 500 workplace is claiming to be leveraging AI because it has a few dozen relatively tech illiterate employees using it to write their em dash/emoji riddled emails about wellness sessions and teams invites for trivia events… there’s not going to be a noticeable uptick in productivity.

The real productivity comes from tooling that no sufficiently risk-averse pubco IS department is going to let their employees use, because when all of their incentives point to saying no to installing anything ever, the idea of granting the permissions required for agentic AI to do anything useful is a non-starter.

enraged_camel|12 days ago

This study spans 3 years, so it goes back to the ChatGPT 3.5 era. Not sure how valid it is, considering the breakneck speed at which everything moves.

TimByte|12 days ago

General-purpose technologies tend to have long and uneven diffusion curves. The hype cycle moves faster than organizational change

trappist|12 days ago

"Admitted" as the verb in a statement like this is blatant editorialization. Did they just finally "admit" what they had been reluctant to reveal? No doubt with their heads hung in shame?

Maybe this bothers me more than it should.

jillesvangurp|12 days ago

I think the article is very premature. Lots of companies are slow to adapt. And while there are a lot of early adopters, there are way more people still not really adapting what they do.

There are some real changes in day-to-day software development. Programmers seem to be spending a lot of time prompting LLMs these days, some more than others, but the trend is pretty hard to deny at this point. That snowballed in just 6-7 months from mostly working in IDEs to mostly working in agentic coding tools. Codex was barely usable before the summer (I'm biased toward it since that is what I use, but it wasn't that far behind Claude Code). Its CLI tool got a lot more usable in autumn, and by Christmas I was using it more and more. The desktop app release and the new model releases only three weeks ago really spiked my usage. Claude Code was a bit earlier but saw a similar massive increase in utility and usability.

It is still early days. This report cannot possibly take into account these massive improvements that have been playing out over essentially just the last few months. This time last year, agentic coding was barely usable. You had isolated early adopters of Claude Code, Cursor, and similar tools. Compared to what we have now, these tools weren't very good.

In the business world things are delayed much more. We programmers have the advantage that many/most of our tools are highly scriptable (by design) and easy for LLMs to figure out. As soon as AI coders figured out how to patch tool calling into LLMs, there was a massive leap in utility as LLMs suddenly gained feedback loops based on existing tools that they could just use.

This has not happened yet for the vast majority of business tools. There are lots of permission and security issues, and proprietary tools that are hard to integrate with. Even things like word processors, spreadsheets, presentation tools, and email/calendar tools remain poorly integrated. You can really see Apple, MS, and Google struggle with this. They are all taking baby steps here, but the state of the art is still "copy this blob of text into your tool". Forget about it respecting your document theme or structure. Agentic tool usage is not widespread outside the software engineering community yet.

The net result is that the business world still has a lot of drudgery in the form of people manually copying data around between UIs that are mostly not accessible to agentic tools yet. Also many users aren't that tool savvy to begin with. It's unreasonable to expect people like that to be impacted a lot by AI this early in the game. There's a lot of this stuff that is in scope for automating with agentic tools. Most of it is a lot less hard than the type of stuff programmers already deal with in their lives.

Most of the effects this will have on the industry will play out over the next few years. We've seen nothing yet. Especially bigger companies will do so very conservatively. They are mostly incapable of rapid change. Just look at how slow the big trillion dollar companies are themselves with eating their own dog food. And they literally invented and bootstrapped most of this stuff. The rest of the industry is worse at this.

The good news is that the main challenges at this point are non technical: organizational lag, security practices, low level API/UI plumbing to facilitate agentic tool usage, etc. None of this stuff requires further leaps in AI model quality. But doing the actual work to make this happen is not a fast process. From proof of concept to reality is a slow process. Five years would be exceptionally fast. That might actually happen given the massive impact this stuff might have.

SilverElfin|12 days ago

These surveys don’t make sense. Ask the forward thinking companies and they’ll say the opposite. The flood of anti AI productivity articles almost feel like they’re meant to lull the population into not seeing what’s about to happen to employment.

amelius|12 days ago

In other words, everybody is benefiting from AI, except CEOs.

lqstuart|12 days ago

I was in the “AI is grossly overhyped” camp because I work on large distributed deep learning training jobs and AI is indeed worthless for those, and will likely always be worthless since the APIs change constantly and the iteration loop is too cumbersome to constantly resubmit broken jobs to a training cluster.

Then I started working on some basic grpc/fullstack crap that I absolutely do not care about, at all, but needs to be done and uses internal frameworks that are not well documented, and now Claude is my best friend at work.

The best part is everyone else’s AI code still sucks, because they ask it to do stupid crap and don’t apply any critical thinking skills to it, so I just tell AI to re-do it but don’t fuck up the error handling and use constants instead of hardcoding strings like a middle schooler, and now I’m a 100x developer fearlessly leading the charge to usher in the AI era as I play the new No Man’s Sky update on my other PC and wait for whatever agent to finish crap.

mulmen|11 days ago

How do they define productivity? How is it measured?