item 47193606

Tell HN: 3 months ago we feared AI was useless. Now we fear it will take our job

11 points| giuliomagnifico | 1 day ago

I was listening to the latest episode of the WSJ podcast (https://www.wsj.com/podcasts/the-journal/the-ai-economic-doomsday-report-that-shook-wall-street/d9b12d37-a743-4a8c-afb6-2488aa9e812f) and what puzzles me is how 2–3 months ago the market feared that the “AI bubble” from tech companies’ trillions of dollars in CAPEX spending would turn out to be useless because AI seemed to have little or no real use. Indeed, after every earnings report with high CAPEX, the stocks dropped.

Now (over the past 10–15 days) the fear seems to have flipped: that AI will replace programmers, videogame developers, financial advisors, and other similar professions, and companies connected to those sectors are dropping (see the S&P Software & Services Select Industry Index https://www.spglobal.com/spdji/en/indices/equity/sp-software-services-select-industry-index/#overview, -20% since the beginning of the year).

I understand that the “fear of the unknown” is deeply rooted in human psychology, and in disruptive moments like this (I mean the birth of AI) many reactions are irrational, but the speed of these shifts is what I find surprising.

What do you think about the situation in the next few months? What could be the reason for the next drop? It almost seems like people are looking for a justification for selling, rather than selling because of a specific reason.

26 comments


59nadir|1 day ago

I don't think a lot of people are really worried that LLMs will successfully replace them, but they might still get let go because the people in charge think they can replace people with LLMs. These two scenarios don't imply the same level of confidence in LLMs at all.

What people who know nothing about creation/production think only matters in the short term, and over a long enough time frame they will be proven wrong.

I've used LLMs via agents and chat for what I do, and I have zero confidence that they could be a productive part of a team without a very knowledgeable handler who knows exactly what they want and how to correct errant output... Meaning you'll still have to hire an engine programmer in order to get a game engine. Then you can pretend that they'll have to use an LLM to get their work done (but given that the "you" in this scenario is completely out of the loop when it comes to production, you wouldn't be able to tell that they did all their work manually, except perhaps if you noticed that velocity went up, bug count went down, and there was more confidence when it came to estimations).

elcritch|1 day ago

> the people in charge think they can replace people with LLMs

Additionally using it as a pretext to fire lots of workers like Amazon and others seem to have been doing. Some friends mentioned their companies using it as a way to offshore to cheaper locales while getting less bad press.

lousken|1 day ago

> the people in charge think they can replace people with LLMs

I am not even sure they think that. It can be a placeholder for any other reason.

kcplate|1 day ago

> without a very knowledgeable handler that knows exactly what they want and how to correct errant output...

For now, but I think the problem will be that we will soon start undercutting the bench and the rookie technologists, which in the future will eliminate the "very knowledgeable handlers" or make them exceedingly rare.

I am of the opinion that AI will improve and make some great leaps for society, and then gradually start to enshittify literally everything, because we will no longer have people able to second-guess the AI and keep it in check.

epolanski|1 day ago

I don't see it discussed very often, maybe because this crowd is concentrated in tech companies, but I can tell you 100% that in Italy and Poland, every single non-IT company with 100+ people is aggressively pushing AI onto its employees.

In Italian banking and insurance companies it's all about writing Gemini "gems" (essentially custom agents) and leveraging NotebookLM, occasionally Microsoft Copilot. Every innovation department out there is promoting and handing bonuses to employees who can show the best savings in time and efficiency through LLMs.

So far I'm not seeing much success, because the people pushing these tools are mostly clueless about what LLMs are good at. They are desperately looking to show that "anything" went from X hours of effort to X/2 or better, and this pressure more often than not alienates most employees, not because they don't appreciate AI, but because at the moment it's mostly an _additional_ task on top of their already existing work.

I myself, as an independent consultant, am tasked by all my clients to automate and automate, and to bring the tools as close as possible to stakeholders, effectively making myself redundant, at least on the software side (albeit I like to think not on the engineering and process side, which is why I've had the same clients since 2022...).

altmanaltman|1 day ago

Isn't this superbly stupid, though? If the users don't even know how LLMs work or what they are good at, why are they being forced to find new uses? Is it just FOMO? Surely a better way would be to let expert researchers/app developers create AI apps that work for niche use cases and have domain-appropriate guardrails etc, right? Then everyone (including non-technical people) could use them and improve productivity or whatever.

It's like forcing someone who has never driven a car to figure out how to make it go faster

tjansen|1 day ago

Over the last 12 months, AI agents have become dramatically better. And in the last 3 months, they have reached a point where, with some light guidance, they can write 100% of the code. Most skeptics have been convinced and are now realizing the impact. That's what you see in the stock market.

I don't know where the ceiling is. And how much of the improvement was due to better context engineering, and how much to better models. I would expect the context engineering to plateau very soon. Not sure about the models.

An even more dramatic change for the whole economy will come when non-IT, non-creative office clerks are replaced. This is mostly a matter of redesigning the interfaces around them. AI could probably already do most of the work, but getting the tasks to the AI, using its output, and communicating with third parties are still major challenges. Take someone processing insurance claims: the AI needs a way to receive the claim, to contact third parties (write emails to humans, communicate with other AI agents, maybe even call humans), and then to initiate the payout. It's doable with today's technology, but still a lot of work.
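The glue layer described here (claim in, model decides, outbound action triggered) can be sketched roughly as below. Everything in this snippet is hypothetical: `Claim`, `classify_claim`, and the routing rules are made-up names, and the LLM call is replaced by a hard-coded stub, since the hard part the comment points at is precisely the plumbing around the model, not the model itself.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    description: str
    amount: float

def classify_claim(claim: Claim) -> str:
    # Stand-in for an LLM call. A real system would send the claim text
    # to a model and parse a structured verdict out of the response.
    if claim.amount > 10_000:
        return "escalate"
    if "fire" in claim.description.lower():
        return "request_documents"
    return "approve"

def process_claim(claim: Claim) -> dict:
    # The "interface redesign" part: map the model's verdict onto the
    # concrete outbound actions (payout, email, human handoff).
    verdict = classify_claim(claim)
    actions = {
        "approve": f"initiate payout of {claim.amount:.2f} for {claim.claim_id}",
        "escalate": f"route {claim.claim_id} to a human adjuster",
        "request_documents": f"email claimant about {claim.claim_id}",
    }
    return {"claim_id": claim.claim_id, "verdict": verdict,
            "action": actions[verdict]}

print(process_claim(Claim("C-001", "Minor water damage", 1200.0)))
```

Even in this toy form, most of the code is routing and formatting rather than "intelligence", which is the point: the decision is one function call, the surrounding workflow is everything else.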

latexr|1 day ago

Everything you’ve listed, both criticisms and hype, has been true ever since these tools were introduced. I don’t recall a single week going by without reading takes that it's a bubble or that it will replace everyone.

giuliomagnifico|1 day ago

Sure, but the “SaaS Apocalypse” is a new thing, as is the “selling on high CAPEX”.

What I mean is that a year ago there was talk on forums of "fear of AI replacing developers", but companies were not losing 20–30% in one day because of it.

Now, besides talking about it among nerds, the situation is having a real impact in the economic/financial world.

javier2|1 day ago

4 months ago I was incredibly dismissive. After having used Claude Code extensively since then, I think these LLM tools definitely have a place in software development. But as with every new tool in software development, the floor has been raised for what can be completed with fewer resources. I'm more worried for the junior engineers coming in now.

kzahel|1 day ago

Why would you be worried about junior engineers? I see this expressed a lot, and it seems kind of condescending to me. It's just a different build toolchain. We can build faster, and having a lot of experience helps you know how things should fit together. People figure shit out. There are plenty of juniors that are way smarter than you or I. Do you mean that a junior who is not as clever as you will have a hard time getting their foot in the door?

jleyank|1 day ago

AI and RTO are wonderful tools to get rid of workers with little short-term hassle or expense. If things go pear shaped in 6 months, it’ll be somebody else’s fault…. Going to suck being in a world of brittle, bloated software that few (if any) know how to fix without regenerating entirely new code by asking with a changed prompt.

Save your old machines that run old software. Use them to debug virtual machines that will let you continue. Or, reduce the software overhang of your business as much as possible to minimize damage.

Miladyshady|1 day ago

The idea that AI will replace programmers has been around since the emergence of AI. I do not know what the future holds. But I know that using AI in software engineering reduces productivity by almost 20%. My point is, one should try to distinguish fact from opinion.

Source: https://arxiv.org/abs/2507.09089

giuliomagnifico|1 day ago

"The idea" is one thing, seeing it concretely, seeing companies laying off 40% of workers or losing 20% in a stock exchange session is another.

> But I know that using in AI in software engineering reduces productivity by almost 20%.

So why are these companies losing billions in a few months?!

Are the big hedge funds stupid or is a pre-print not considered reliable?

stavros|1 day ago

One thing has always been constant throughout, though: It's always about the stock market.

giuliomagnifico|1 day ago

Yes, but this has a real effect on the economy. Additionally, it's not solely related to the stock market. Look at the announcement from Block yesterday about cutting 40% of their workforce due to AI (for example), or the memory shortage due to AI datacenters on the other side.

simianwords|1 day ago

I have the same feeling, but I don't think it's a change that you observed in 3 months. Rather, it's different people's opinions being highlighted at different times.

What is a bit irrational is something I have noticed: the same people claim that AI is a bubble but also fear job losses from AI, and also think that the billionaires get rich out of this. How all of these things can happen together, I don't know.

It's just likely that people can't deal with uncertainty and fear change: they resort to opposing change with arguments from all dimensions, even if those contradict each other.

techjamie|1 day ago

> [...] the same people claim that AI is both a bubble but also fear job losses from AI. But also think that the billionaires get rich out of this. How all of these things can happen together, I don't know.

One can believe the thing is a bubble, while also acknowledging the existential fear that if it isn't, then it might come and basically ruin your life. It's a balancing scale, and one side is weighted much more heavily than the other. Plus, as we've already seen, some executives are really jumping on the bandwagon and using AI as an excuse for massive layoffs, and finding a job in the current market unless you're particularly valuable is difficult.

So in that last case, AI can be a useless bubble that takes your job anyway because of trigger-happy CEOs.

sph|1 day ago

Meh, I started planning my exit from this career a year ago, and every month I am more and more convinced I am making the right choice.