item 45618350

AI has a cargo cult problem

178 points | cs702 | 4 months ago | ft.com

139 comments

[+] tra3|4 months ago|reply
If I'm tired of one thing related to AI/llm/chatbots it's the claims that it's not useful. It 100% is. We have to separate the massive financial machinations from the actual tech.

Reading this article though, I'm questioning my decision to avoid hosting open source LLMs. Supposedly the performance of Qwen-Coder is comparable to the likes of Sonnet 4. If I invest in a homelab that can host something like Qwen3 I'll recoup my costs in about 20 months without having to rely on Anthropic.
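The break-even claim is easy to sanity-check yourself. A minimal sketch, where every number (hardware price, power draw, API spend) is an illustrative assumption rather than a figure from the thread:

```python
# Rough break-even sketch for self-hosting vs. a hosted API.
# All numbers below are hypothetical assumptions for illustration.
hardware_cost = 4000.0   # assumed one-time homelab build (GPU, RAM, PSU), USD
power_per_month = 30.0   # assumed electricity cost of running it, USD/month
api_per_month = 230.0    # assumed hosted-API spend it would replace, USD/month

# Self-hosting pays off once cumulative savings cover the hardware:
#   hardware_cost = months * (api_per_month - power_per_month)
months_to_break_even = hardware_cost / (api_per_month - power_per_month)
print(round(months_to_break_even))  # → 20
```

The sensitivity matters more than the point estimate: halve your API spend assumption and the break-even horizon roughly doubles, which is exactly the "20 months behind state of the art" risk raised downthread.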

[+] mynameisash|4 months ago|reply
I don't think I've ever seen anyone say they're not useful. Rather, they don't appear to live up to the hype, and they're sure as hell not a panacea.

I'm pretty bearish on LLMs. I also think they're over-hyped and that the current frenzy will end badly (globally, economically speaking). That said, sure, they're useful. Doesn't mean they're worth it.

[+] didibus|4 months ago|reply
> it's the claims that it's not useful

I think the reason is that it depends on what impact metrics you want to measure. "Usefulness" is in the eye of the beholder. You have to decide what metric you consider "useful".

If it's company profit for example, maybe the data shows it's not yet useful and not having impact on profit.

If it's the level of concentration needed by engineers to code, then you probably can see that metric having improved as less mental effort is needed to accomplish the same thing. If that's the impact you care about, you can consider it "useful".

Etc.

[+] Octoth0rpe|4 months ago|reply
> It 100% is [useful]

It's worth disambiguating between "worth $50b of investment" useful and "worth $1t of investment" useful.

[+] criemen|4 months ago|reply
> Supposedly the performance of Qwen-Coder is comparable to the likes of Sonnet 4. If I invest in a homelab that can host something like Qwen3 I'll recoup my costs in about 20 months without having to rely on Anthropic.

You can always try it via openrouter without investing in the home setup first. That allows you to evaluate whether it hits your quality bar or not, and is much cheaper. It is less fun than self-hosting though.
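One way to do that trial with no new dependencies: OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so a plain stdlib request is enough. The model slug below is an assumption; check openrouter.ai for current model names and pricing:

```python
import json
import os
import urllib.request

# Hypothetical sketch: trying a hosted Qwen coder model via OpenRouter's
# OpenAI-compatible endpoint before committing to homelab hardware.
payload = {
    "model": "qwen/qwen3-coder",  # assumed slug; verify on openrouter.ai
    "messages": [
        {"role": "user", "content": "Write a binary search in Python."}
    ],
}

api_key = os.environ.get("OPENROUTER_API_KEY")
if api_key:  # only make the network call if a key is configured
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
```

Running a week of your real workload through something like this gives you a quality verdict and a cost baseline before any hardware money moves.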

[+] silversmith|4 months ago|reply
The issue is that the field is still moving too fast - in 20 months, you might break even on costs, but the LLMs you are able to run might be 20 months behind "state of the art". As long as providers keep selling cheap inference, I'm holding out.
[+] mrbungie|4 months ago|reply
It's hell useful, I use Cursor several times a week (and I'm not even working as a dev full time rn), and ChatGPT is my daily driver.

Yet, it's weird to me that we're 3 years into this "revolution" and I can't get a decent slideshow from an LLM without having to practically build a framework for doing so.

[+] mrdependable|4 months ago|reply
They are useful, but I find it is only slightly more convenient than a Google search. Losing something like GPS on my phone would be a much bigger disruption to my life.
[+] arjie|4 months ago|reply
I used Qwen3-480B-Coder with Cerebras and it was not very good for my use case. You can run these models online first to see if they will work for you. I recommend you try that first.
[+] noosphr|4 months ago|reply
I had a hilarious exchange on here where I used an LLM to explain to a poster at length why they fundamentally didn't understand what I said. It did a bang up job. The poster, and a lot of other people, got mad I used AI and they still didn't understand my original post, or the AI explanation.

LLMs aren't terribly useful to people who fundamentally can't read. When those people can also type very fast you get the current situation.

[+] huevosabio|4 months ago|reply
The problem with self-hosting is that it increases the friction of swapping models to use whatever is SOTA, or whatever best fits your purpose.

Also, I've heard from others that the Qwen models are a bit too overfit to the benchmarks and that their real-life usage is not as impressive as they would appear on the benchmarks.

[+] reissbaker|4 months ago|reply
Qwen3 Coder unfortunately isn't on par with Sonnet, no matter what the benchmarks say. GLM-4.6 does feel pretty competitive though.

You'll need a pretty expensive home lab to run it though... I'd be surprised if 20 months of Sonnet usage would cover the cost of running it at long context.

[+] dvfjsdhgfv|4 months ago|reply
> If I'm tired of one thing related to AI/llm/chatbots it's the claims that it's not useful.

That is the best example of a straw-man argument I've seen this year. I enjoy reading discussions on LLMs and have seen a huge number of arguments, some reasonable and some ridiculous, but one thing I haven't seen is someone claiming that LLMs are not useful. We can discuss usefulness for a particular purpose, or the level of fitness for it, but not the fact that millions of people find LLMs useful enough to pay for them.

[+] imiric|4 months ago|reply
> If I'm tired of one thing related to AI/llm/chatbots it's the claims that it's not useful. It 100% is. We have to separate the massive financial machinations from the actual tech.

It's indisputable that the tech is and can be very useful, but it's also surrounded by a bubble of grifters and opportunists riding the hype and money train.

The sooner we start ignoring the "AI", "ASI", "AGI", anthropomorphization, and every other snake oil these people are peddling, the sooner we can focus on practical applications of the tech, which are numerous.

[+] somewhereoutth|4 months ago|reply
> the claims that it's not useful

There are many credible claims that not only is it not useful, but that it is actually causing serious damage.

[+] ants_everywhere|4 months ago|reply
The other thing that's tiring is talking about how AI is a bubble as if that's an indictment of AI.

Being a bubble is a statement about the value of the stock market, not about the technology. There was a dotcom bubble, but that does not mean the internet wasn't valuable. And if you bought at the top of the dotcom bubble you'd be much wealthier now than you were when you bought. But it would have taken you a significant time to break even.

[+] zmmmmm|4 months ago|reply
> If I invest in a homelab that can host something like Qwen3 I'll recoup my costs in about 20 months without having to rely on Anthropic

For me it's equally that I don't trust any of these service providers to keep maintaining whatever service or model I'm relying on. Imagine if I build a whole entire process and then the bubble bursts and they either take away what I'm using or start charging outrageous amounts for it.

I feel we are well into the point where the base technology is useful enough and all the work is in how you implement and adapt it in to your process / workflow. A new model coming out that is 3% better is relatively meaningless compared to me figuring out how better to integrate what I already have which might give me a 20% bump for very little effort.

So at this point all I really want is stability in the tech so I can optimise everything else. Constant churn of hosted providers thrusting change at me every second day is actively harmful to my productive use of it at this point. Hence I want local models so I can just tune out the noise and focus on getting things done.

[+] moomin|4 months ago|reply
I want to ask ChatGPT to point to a behaviour described in the article that resembles cargo-culting with AI, but I don’t want to waste my future overlord’s time.
[+] johnohara|4 months ago|reply
Not sure "Cargo Cult" is an apt description. Feynman's description of Cargo Cult Science was predicated on the behavior of islanders building structures in the expectation that they would summon the planes, cargo, personnel, etc. that used the island during WWII.

Without a previous experience they would not have built anything.

There is no previous AI experience behind today's pursuit of the AI grail. In other words, no planes with cargo driving an expectation of success. Instead, the AI pursuit is based upon the probability of success, which is aptly defined as risk.

A correct analog would be the islanders building a boat and taking the risk of sailing off to far away shores in an attempt to procure the cargo they need.

[+] wmf|4 months ago|reply
Arguably AI is already "successful" in terms of funding and press coverage and that's what many people are chasing.
[+] blamestross|4 months ago|reply
Yeah, "cargo cult" is abused as a term. Those islanders were smarter than what is happening here.

We use it dismissively, but "cargo cult" behaviour is entirely reasonable. You know an effect is possible, and you observe novel things correlating with it. You try them to test the causality. It looks silly when you know the lesson already, but it was intelligent and reasonable behaviour the entire way.

The current situation is bubble denial, not cargo culting. Blaming cargo culting is a mechanism of bubble denial here.

[+] smogcutter|4 months ago|reply
This is a good point as a tangent. “Cargo Cult” is a meaningful phrase for ritualizing a process without understanding it.

Debasing the phrase makes it less useful and informative.

It’s a cargo cult usage of “cargo cult”!

[+] sails|4 months ago|reply
I’m amazed they published it with such a poorly applied analogy.
[+] jasonthorsness|4 months ago|reply
Everyone has imperfect information; this isn't a cargo cult situation where it's massively asymmetric, this is more like when you see everyone else running, it's generally a good idea to start running too. But when that heuristic fails it fails in a pretty spectacular way.
[+] nextworddev|4 months ago|reply
Yes, this rally seems overextended. But investor sentiment - if anything - has already swung to very negative, which isn't ideal if you want it to crash.

Bubbles don't pop without indiscriminate euphoria (Private markets are a different story, but VCs are fked anyways). If anything, the prices have reflected less than 20% of Capex projections, so the market clearly thinks OpenAI / Stargate / FAANG's capex plans are BS.

p.s. if everyone thinks it's a bubble, it generally rallies even more..

[+] vonneumannstan|4 months ago|reply
>If anything, the prices have reflected less than 20% of Capex projections, so the market clearly thinks OpenAI / Stargate / FAANG's capex plans are BS.

I'd say if anything the market is massively underestimating the scale of their capex plans. These things are using as much electricity as small cities. They are well past breaking ground, the buildings are going up as we speak.

https://www.datacenterdynamics.com/en/news/openai-and-oracle...

https://x.com/sama/status/1947640330318156074/photo/1

There are dozens of these planned.

[+] jerf|4 months ago|reply
Cargo cult as a metaphor doesn't work here. That's for when the cargo culters don't understand what is going on, and attempt to imitate the actions without understanding or accuracy. AI investors understand what is going on and understand that this may be a bubble and they may lose their investment. We may disagree with them about the probabilities of such an outcome, perhaps even quite substantially, but that's not the same thing as thinking that if I just write some number-looking-squiggles on a piece of paper and slide it under the door of a building that looks like it has computers on it I will have a pool and a lambo when I get home. That's what "cargo cult" investing would look like.

The AI investors know what they are doing. By which I mean: if this is every bit the bubble some of us think it is, and it pops as viciously as it possibly can, and these investors lose everything from top to bottom, then if they tried to say "I didn't know that could happen!" I simply wouldn't believe them, and neither would anyone else. Of course they know it's possible. They may not believe it is likely, but they are 100% operating from a position of knowledge and understanding, and taking actions that have a completely reasonable through-line to successfully achieving their goals. Indeed, I'm sure some people have sufficiently cashed out of their positions, or diversified them, that they have already completely succeeded; worries about the bubble are worries about a sector and a broad range of people, but some individuals can and will come out of this successfully even if it completely detonates in the future. If nothing else, the people simply drawing salaries against the bubble, even completely normal non-inflated ones, can be called net winners.

[+] llm_nerd|4 months ago|reply
The cargo cult metaphor is weak. If an article written in the year of our FSM 2025 describes Melanesian cargo cults to make a point, they're probably just copying a trope from other articles. Cargo culting, if you will, much like Melanesian cargo cults that would wear bamboo earpieces and...

Is it a gold rush? Absolutely. There is a massive FOMO and everyone is rushing to claim some land, while the biggest profiteers of all are ones selling the shovels and pick axes. It's all going to wash out and in the end a very small number of players will be making money, while everyone else goes bust.

While many people think the broadly described AI is overhyped, I think people are grossly underestimating how much this changes almost everything. Very few industries will be untouched.

[+] rjsw|4 months ago|reply
The author is an anthropologist, I think she knows the original meaning of "cargo cult".

The 'cult' behaviour described in the article is that of building big data centres without knowing how they will make money for the real business of the tech companies doing it. They have all bought AI startups but that doesn't mean that the management of the wider company understands it.

[+] saltcured|4 months ago|reply
Yeah, if cargo cult were applied aptly, it would fit better for the folks who are all-in on using LLMs yet not getting any net productivity boost. They're basically just LARPing a dream world, with no tangible benefit compared to the Old Ways.
[+] hansonkd|4 months ago|reply
Yeah, not seeing the connection to cargo cults, unless AGI already appeared, offered us an incredible bounty of benefits, and then left, so we all created a religion in order to summon AGI back.
[+] micromacrofoot|4 months ago|reply
Tech has a cargo cult problem
[+] blackoil|4 months ago|reply
Tech has a winner takes all problem. All those billions are chasing trillions of valuation. Many will fail, but some will be ruling(metaphorically) the world
[+] burnt-resistor|4 months ago|reply
The problem is the self-reinforcing valuation entanglements that give NVIDIA a market cap of 4.42 teradollars, at least until Meta, Goog, Apple, and Microsoft develop their own custom NPUs or the fragile bubble bursts in some other way.

There's value here, but probably not as much as the market thinks... yet.

[+] gdulli|4 months ago|reply
Maybe it's human nature that has a cargo cult problem and AI is just the current flypaper?
[+] ctoth|4 months ago|reply
The only cargo cult behavior I see here is Tett's own journalism! She casually drops that same debunked "95% of companies see no AI revenue gains" figure[0] without tracing it to source, performing the ritual of citation while missing the actual mechanism that makes evidence valuable.

[0] https://aiascendant.com/p/why-95-of-ai-commentary-fails

[+] tim333|4 months ago|reply
>Either way, anyone engaged ... needs to ... read up on those Melanesian cargo cults.

So I had a look at Wikipedia and the cargo cults are not really as advertised:

>The first documented cargo cults were religious movements that foretold followers would imminently receive an abundance of (often Western) food and goods (the "cargo") brought by their ancestors.

>Cargo cults have a wide diversity of beliefs and practices, but typically (though not universally) include: charismatic prophet figures foretelling a coming cataclysm or utopia...

So we are talking a religion rather than mixing up correlation and causation.

The 'AI bubble' seems much more like the dot com bubble or 'railway mania' than a religious thing.

[+] waprin|4 months ago|reply
edit: made a goal to avoid pointless internet flame wars that I briefly lapsed from
[+] stego-tech|4 months ago|reply
From the perspective of AI critics like myself, HN is awash in posts showing what folks have done with AI or boosting AI PR pieces, while critics often get flagged and our submissions shunted away from the front page. AI Boosters claim that all this CAPEX will create a Utopia where nobody has to work anymore, economies grow exponentially forever, and societal ills magically disappear through the power of AGI. On the other side, a lot of AI Doomers point out the perils of circular financing, CAPEX investments divorced from reality, underlying societal and civilizational issues that will hinder any potential positive revolution from/by AI, and corporate valuations with no basis other than hype.

Where commenters like yourself trip themselves up is a staunch refusal to be objective in your observations. Nobody is doubting the excitement of new technologies and their potential, including LLMs; we doubt the validity of the claims of their proponents that these magic boxes will somehow cure all diseases and accelerate human civilization into the galactic sphere through automated R&D and production. When Op-Eds, bloggers, and commenters raise these issues, they’re brow-beaten, insulted, flagged, and shunted away from the front page as fast as humanly possible lest others start asking similar questions. While FT’s Op-Eds aren’t exactly stellar to begin with, and this one is similarly milquetoast at first glance, the questions and concerns raised remain both valid and unaddressed by AI Boosters like yourselves. Specifics are constantly nitpicked in an effort to discredit entire arguments, rather than address the crux of the grievance in a respectable manner; boosters frequently come off like a sleazy Ambulance-Chasing Lawyer on TV discrediting witnesses through bad-faith tactics.

Rather than bloviate about the glory of machine gods or whine about haters, actually try listening to the points of your opponents and addressing them in a respectful and honest manner instead of trying to find the proverbial weak point in the block tower. You - and many others - continue to willfully miss the forest for the specific tree you dislike within it, and that’s why this particular era in tech continues to devolve into toxicity.

At the end of the day, there is no possible way short of actual lived outcome for either side to prove their point as objectively correct. Though when one side spends their time hiding and smearing critique from their opponents instead of discussing it in good faith, that does not bode well for their position.

[+] ellg|4 months ago|reply
"shocking little amount of discussion"

are we reading the same website...

[+] matusp|4 months ago|reply
Fully agreed. The author also can't decide whether AI is a Ponzi scheme, a bubble, or a cargo cult; so let’s just use them all! It's just buzzwords without any real analysis beyond what is generally known about the field.
[+] halayli|4 months ago|reply
The paper claims that 95% of companies see no AI revenue gains, which seems like an outrageous blanket statement. The truth is likely somewhere in the middle.

The real issue here is a fundamental statistical and categorical error: the paper lumps all industries, company sizes, and maturity levels under the single umbrella of "companies" and applies one 95% figure across the board. This is misleading and potentially produces false conclusions.
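The aggregation problem is easy to make concrete. A toy illustration with invented numbers (no relation to the actual survey data): a single pooled "no revenue gain" rate can sit at 95% while the underlying segments behave completely differently.

```python
# Toy illustration: one pooled percentage hides heterogeneous subgroups.
# All segment names and counts below are invented for illustration only.
segments = {
    # segment: (companies surveyed, companies reporting no revenue gain)
    "small retail":       (800, 792),  # 99% report no gain
    "mid-size logistics": (150, 135),  # 90% report no gain
    "large tech":         (50, 23),    # 46% report no gain
}

total = sum(n for n, _ in segments.values())
no_gain = sum(k for _, k in segments.values())

print(f"pooled: {no_gain / total:.0%}")  # → pooled: 95%
for name, (n, k) in segments.items():
    print(f"  {name}: {k / n:.0%}")
```

The pooled 95% is arithmetically true yet tells you almost nothing about any individual segment, which is exactly the blanket-statement objection.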

How can anyone take this paper seriously when it makes such a basic mistake? Different industries have vastly different AI adoption curves, infrastructure requirements, and implementation timelines.

It's equally concerning that journalists are reporting on this without recognizing or questioning this methodological flaw.

[+] Yaina|4 months ago|reply
I think what we're seeing, and what the article describes, are company leaders across industries reacting to the AI hype by saying "we need AI too!" not because they've identified a specific problem it can solve, but because they want to appear innovative or cut labor costs.

Right now, the market values saying you're doing AI more than actually delivering meaningful results.

Most leaders don't seem to view AI as a practical tool to improve a process, but as a marketing asset. And let’s be honest: we're not talking about the broad field of machine learning here, but mostly about integrating LLMs in some form.

So coming back to the revenue claims: Greenhouse (the job application platform) for example now has a button to improve your interview summary. Is it useful? Maybe. Will it drastically increase revenue? Probably not. Does it raise costs? Yes; because behind the scenes they’re likely paying OpenAI processing fees for each request.

This is emblematic of most AI integrations I've seen: minor customer benefits paired with higher operational costs.

[+] fishmicrowaver|4 months ago|reply
It's not clear to me how much companies are even attempting to quantify the value of 'AI'. Having 'AI' is the value. It's similar to the Data Science / Machine Learning craze, where managers decided that we must have ML, instead of considering it one among many capabilities that may or may not be useful for a particular problem.
[+] molyss|4 months ago|reply
I think it's a bit disingenuous to reduce the article to a single sentence that's in parentheses and links to a widely shared publication about an MIT report. Especially when said article continues with "Don’t get me wrong: I am not denying the extraordinary potential of AI to change aspects of our world, nor that savvy entrepreneurs, companies and investors will win very big. It will — and they will."

One doesn't have to agree with the original report, but one can't in good faith deny that the whole thing smells of a financial scheme with circular contracts, massive investments for an industry that's currently losing money by the billion and unclear financial upside for most other companies out there.

I'm not saying AI is useless or that it will never be useful, I'm just saying that there are some legitimate reasons to worry about the amounts of money that are being poured into it and its potential impact on the economy at large. I believe the article is simply taking a similar stance.