(no title)
lsy | 7 months ago
1. LLMs are a new technology and it's hard to put the genie back in the bottle with that. It's difficult to imagine a future where they don't continue to exist in some form, with all the timesaving benefits and social issues that come with them.
2. Almost three years in, companies investing in LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them, the majority of consumer usage is at the free tier, the industry is seeing the first signs of pulling back investments, and model capabilities are plateauing at a level where most people agree that the output is trite and unpleasant to consume.
There are many technologies that have seemed inevitable and seen retreats under the lack of commensurate business return (the supersonic jetliner), and several that seemed poised to displace both old tech and labor but have settled into specific use cases (the microwave oven). Given the lack of a sufficiently profitable business model, it feels as likely as not that LLMs settle somewhere a little less remarkable, and hopefully less annoying, than today's almost universally disliked attempts to cram it everywhere.
strange_quark|7 months ago
I think this is a great analogy, not just to the current state of AI, but maybe even computers and the internet in general.
Supersonic transports must've seemed amazing, inevitable, and maybe even obvious to anyone alive at the time of their debut. But hiding under that amazing tech was a whole host of problems that were just not solvable with the technology of the era, let alone a profitable business model. I wonder if computers and the internet are following a similar trajectory to aerospace. Maybe we've basically peaked, and all that's left are optimizations around cost, efficiency, distribution, or convenience.
If you time traveled back to the 1970s and talked to most adults, they would have witnessed aerospace go from loud, smelly, and dangerous prop planes to the 707, 747 and Concorde. They would have witnessed the moon landings and seen the development of the Space Shuttle. I bet they would have called you crazy if you told them that 50 years later, in 2025, there would be no more supersonic commercial airliners, commercial aviation would look basically the same except more annoying, and that we still hadn't been back to the moon. In the previous 50 years we went from the Wright Brothers to the 707! So maybe in 2075 we'll all be watching documentaries about LLMs (maybe even on phones or laptops that look basically the same), reminiscing about the mid-2020s and wondering why what seemed to be such a promising technology disappeared almost entirely.
kenjackson|7 months ago
A better example, also in the book, is skyscrapers. Each year they grew, and new ones were taller than the ones the year before. The ability to build them and traverse them increased each year with new supporting technologies. There wasn't a general consensus around issues that would stop growth (except at extremes like air pressure). But the growth did stop. No one even has expectations of taller skyscrapers any more.
LLMs may fail to advance, but not because of any consensus reason that exists today. And it may be that they serve their purpose as something to build on top of, which ends up being far more revolutionary than LLMs themselves. This is more like the path of electricity -- electricity in itself isn't that exciting nowadays, but almost every piece of technology built uses it.
I fundamentally find it odd that people seem so against AI. I get the potential dystopian future, which I also don't want. But the more mundane annoyance seems odd to me.
Earw0rm|7 months ago
Want to save people time flying? Solve the grotesque inefficiency pit that is airport transit and check-in.
Like, I'm sorry, STILL no high speed, direct to terminal rail at JFK, LAX and a dozen other major international airports? And that's before we get to the absolute joke of "border security" and luggage check-in.
Sure, supersonic afterburning engines are dope. But it's like some 10GHz single-core CPU that pulls 1.2kW out of the wall. Like it or not, an iPhone 16 delivers far more compute utility in far more scenarios.
It's hard for me to believe that anyone who works with technology in general, and LLMs in particular, could think this.
Lu2025|7 months ago
Progress is often an S shaped curve and we are nearing saturation.
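A minimal sketch of that S-curve intuition (an illustrative logistic curve with made-up parameters, not fitted to any real data): early on, each unit of effort buys large gains; near saturation, the same effort buys almost nothing.

```python
import math

# Logistic curve f(x) = L / (1 + exp(-k * (x - x0))); parameters are illustrative only.
def logistic(x, L=1.0, k=1.0, x0=0.0):
    return L / (1.0 + math.exp(-k * (x - x0)))

# Marginal gain from one unit of "effort" at different points on the curve.
mid_gain = logistic(0.5) - logistic(-0.5)   # steep middle of the S
late_gain = logistic(5.0) - logistic(4.0)   # near saturation, gains collapse
print(mid_gain, late_gain)
```

The point is only qualitative: if progress really is logistic, identical effort late in the curve yields a small fraction of what it did earlier.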
yieldcrv|7 months ago
its actually kind of scary to think of a world where generative AI in the cloud goes away due to costs, in favor of some other lesser chimera version that can't currently be predicted
but good news is that locally run generative AI is still getting better and better with fewer and fewer resources consumed to use
SJC_Hacker|7 months ago
The conspiracy theorist in me says the American aerospace manufacturers at the time (Boeing, McDonnell Douglas, etc.) did everything they could to kill the Concorde. With limited flyable routes (NYC and DC to Paris and London, I think, were the only ones), the financials didn't make sense. If overland routes had been available, especially opening up LA, San Francisco and Chicago, it might have been a different story.
brokencode|7 months ago
That is such a wild claim. People like the output of LLMs so much that ChatGPT is the fastest growing app ever. It and other AI apps like Perplexity are now beginning to challenge Google’s search dominance.
Sure, probably not a lot of people would go out and buy a novel or collection of poetry written by ChatGPT. But that doesn’t mean the output is unpleasant to consume. It pretty undeniably produces clear and readable summaries and explanations.
pera|7 months ago
While people seem to love the output of their own queries they seem to hate the output of other people's queries, so maybe what people actually love is to interact with chatbots.
If people loved LLM outputs in general then Google, OpenAI and Anthropic would be in the business of producing and selling content.
underdeserver|7 months ago
The people using ChatGPT like its output enough when they're the ones reading it.
The people reading ChatGPT output that other people asked for generally don't like it. Especially if it's not disclosed up front.
ants_everywhere|7 months ago
Some people who hate LLMs are absolutely convinced everyone else hates them. I've talked with a few of them.
I think it's a form of filter bubble.
sejje|7 months ago
"Here's what chatGPT said about..."
I don't like that, either.
I love the LLM for answering my own questions, though.
xnx|7 months ago
Now that is a wild claim. ChatGPT might be challenging Google's dominance, but Perplexity is nothing.
And how much of that is free usage, like the parent said? Even when users are paying, ChatGPT's costs are larger than their revenue.
JohnMakin|7 months ago
And this kind of meaningless factoid was immediately surpassed by the Threads app release, which IMO is kind of a pointless app. Maybe let's find a more meaningful metric before calling someone else's claim wild.
const_cast|7 months ago
> That is such a wild claim.
I think when he said "consume" he meant in terms of content consumption. You know, media - the thing that makes Western society go round. Movies, TV, music, books.
Would I watch an AI generated movie? No. What about a TV show? Uh... no. What about AI music? I mean, Spotify is trying to be tricky with that one, but no. I'd rather listen to Remi Wolf's 2024 Album "Big Ideas", which I thought was, ironically, less inspired than "Juno" but easily one of the best albums of the year.
ChatGPT is a useful interface, sure, but it's not entertaining. It's not high-quality. It doesn't provoke thought or offer us some solace in times of sadness. It doesn't spark joy or make me want to get up and dance.
dbalatero|7 months ago
https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the...
lordnacho|7 months ago
From where I'm standing, the models are useful as is. If Claude stopped improving today, I would still find use for it. Well worth 4 figures a year IMO.
dvfjsdhgfv|7 months ago
Actually, I'd be very curious to know this. Because we already have a few relatively capable models that I can run on my MBP with 128 GB of RAM (and a few less capable models I can run much faster on my 5090).
In order to break even they would have to minimize the operating costs (by throttling, maiming models etc.) and/or increase prices. This would be the reality check.
But the cynic in me feels they prefer to avoid this reality check and use the tried and tested Uber model of permanent money influx with the "profitability is just around the corner" justification but at an even bigger scale.
Forgeties79|7 months ago
I imagine they would’ve flicked that switch if they thought it would generate a profit, but as it is it seems like all AI companies are still happy to burn investor money trying to improve their models while I guess waiting for everyone else to stop first.
I also imagine it’s hard to go to investors with “while all of our competitors are improving their models and either closing the gap or surpassing us, we’re just going to stabilize and see if people will pay for our current product.”
bbor|7 months ago
Funny seeing that comment on this post in particular, tho. When OP says “I’m not sure it’s a world I want”, I really don’t think they’re thinking about corporate revenue opportunities… More like Rehoboam, if not Skynet.
827a|7 months ago
What's happening here is pretty clear to me: it's a form of enshittification. These companies are struggling to find a price point that supports both broad market adoption ($20? $30?) and the intelligence/scale to deliver good results ($200? $300?). So, they're nerfing cheap plans, prioritizing expensive ones, and pissing off customers in the process. Cursor even had to apologize for it [3].
There's a broad sense in the LLM industry right now that if we can't get to "it" (AGI, etc.) by the end of this decade, it won't happen during this "AI Summer". The reason for that is two-fold: intelligence scaling is logarithmic w.r.t. compute, and we simply cannot scale compute quickly enough. And interest in funding to pay for that exponential compute need will dry up; previous super-cycles tell us that happens on the order of ~5 years.
So here's my thesis: we have a deadline that even evangelists agree is a deadline. I would argue that we're further along in this supercycle than many people realize, because these companies have already reached the early enshittification phase for some niche use-cases (software development) [1][2]. We're also seeing Grok 4 Heavy release with a 50% price increase ($300/mo) yet offer single-digit percent improvement in capability. This is hallmark enshittification.
Enshittification is the final, terminal phase of hyperscale technology companies. Companies can remain in that phase potentially forever, but it's not a phase where significant research, innovation, and optimization happen; instead, it is a phase of extraction. AI hyperscalers genuinely speedran this cycle thanks to their incredible funding and costs, but they're now showcasing very early signals of enshittification.
(Google might actually escape this enshittification supercycle, to be clear, and that's why I'm so bullish on them and them alone. Their deep, multi-decade investment into TPUs, cloud infra, and high-margin product deployments of AI might help them escape it.)
[1] https://www.reddit.com/r/cursor/comments/1m0i6o3/cursor_qual...
[2] https://www.reddit.com/r/ClaudeAI/comments/1lzuy0j/claude_co...
[3] https://techcrunch.com/2025/07/07/cursor-apologizes-for-uncl...
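The "logarithmic w.r.t. compute" claim above can be illustrated with a toy calculation (the pure-log form is an assumption for illustration, not an actual published scaling law): if capability grows like log10(compute), every constant increment of capability costs 10x more compute than the last, so exponential spending yields only linear progress.

```python
import math

# Toy model: capability = log10(compute); purely illustrative, not a real scaling law.
def capability(compute):
    return math.log10(compute)

# Each 10x jump in compute buys the same additive capability gain.
gains = [capability(10 ** k) - capability(10 ** (k - 1)) for k in range(3, 7)]
print(gains)
```

Under this toy model the funding question is just the inverse: holding the rate of capability gains constant requires compute (and spending) to grow exponentially.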
erlend_sh|7 months ago
https://knightcolumbia.org/content/ai-as-normal-technology
https://news.ycombinator.com/item?id=43697717
highfrequency|7 months ago
> What eventually allowed gains to be realized was redesigning the entire layout of factories around the logic of production lines. In addition to changes to factory architecture, diffusion also required changes to workplace organization and process control, which could only be developed through experimentation across industries.
api|7 months ago
(1) Model capabilities will plateau as training data is exhausted. Some additional gains will be possible by better training, better architectures, more compute, longer context windows or "infinite" context architectures, etc., but there are limits here.
(2) Training on synthetic data beyond a very limited amount will result in overfitting because there is no new information. To some extent you could train models on each other, but that's just an indirect way to consolidate models. Beyond consolidation you'll plateau.
(3) There will be no "takeoff" scenario -- this is sci-fi (in the pejorative sense) because you can't exceed available information. There is no magic way that a brain in a vat can innovate beyond available training data. This includes for humans -- a brain in a vat would quickly go mad and then spiral into a coma-like state. The idea of AI running away is the information-theoretic equivalent of a perpetual motion machine and is impossible. Yudkowsky and the rest of the people afraid of this are crackpots, and so are the hype-mongers betting on it.
So I agree that LLMs are real and useful, but the hype and bubble are starting to plateau. The bubble is predicated on the idea that you can just keep going forever.
ludicrousdispla|7 months ago
120+ Cable TV channels must have seemed like a good idea at the time, but like LLMs the vast majority of the content was not something people were interested in.
strangescript|7 months ago
AI is the opposite. There are numerous things it can do and numerous ways to improve it (currently). There is lower upfront investment than say a supersonic jet and many more ways it can pivot if something doesn't work out.
Jensson|7 months ago
There are thousands of startups doing exactly that right now, why do you think this will work when all evidence points towards it not working? Or why else would it not already have revolutionized everything a year or two ago when everyone started doing this?
camillomiller|7 months ago
What does this EVEN mean? Do words have any value still, or are we all just starting to treat them as the byproduct of probabilistic tokens?
"Agent architectures". Last time I checked an architecture needs predictability and constraints. Even in software engineering, a field for which the word "engineering" is already quite a stretch in comparison to construction, electronics, mechanics.
Yet we just spew the non-speak "agentic architectures" as if the innate inability of LLMs to manage predictable quantitative operations were not an unsolved issue. As if putting more and more of these things together will automagically solve their fundamental and existential issue (hallucinations) and suddenly make them viable for unchecked and automated integration.
pydry|7 months ago
There are underserved areas of the economy but agentic startups is not one.
dvfjsdhgfv|7 months ago
For sure there is a portion of developers who don't care about the future, are not interested in current developments, and just live as before hoping nothing will change. But the rest already gave it a try and realized that tools like Claude Code can give excellent results for small codebases but fail miserably at more complex tasks, with the net result being negative: you get a codebase you don't understand, with many subtle bugs and inconsistencies created over a few days that you will need weeks to discover and fix.
mns|7 months ago
Which is basically what? The infinite monkey theorem? Brute-forcing solutions to problems at huge cost? Somehow people have been tricked into embracing and accepting that they now have to pay subscriptions from $20 to $300 to freaking code. How insane is that? Something that had a very low entry point, something anyone could do, is being turned into a classist system where the future of code is subscriptions you pay to companies run by sociopaths who don't care that the world burns around them, as long as their pockets are full.
UncleOxidant|7 months ago
Msurrow|7 months ago
I agree with you, but I’m curious; do you have link to one or two concrete examples of companies pulling back investments, or rolling back an AI push?
(Yes it’s just to fuel my confirmation bias, but it’s still feels nice:-) )
magic_hamster|7 months ago
Several more unfounded claims were made here, but I just wanted to say that LLMs with MCP are definitely good enough for almost every use case you can come up with, as long as you can provide them with high quality context. LLMs are absolutely the future and they will take over massive parts of our workflow in many industries. Try MCP for yourself and see. There's just no going back.
ramoz|7 months ago
MCP isn’t inherently special. A Claude Code with a Bash() tool can do nearly anything an MCP server will give you - much more efficiently.
Computer Use agents are here and are only going to get better.
The conversation shouldn’t be about LLMs any longer. Providers will be providing agents.
dontlikeyoueith|7 months ago
This just shows you lack imagination.
I have a lot of use cases that they are not good enough for.
nyarlathotep_|7 months ago
I'm genuinely surprised that Code forks and LLM CLI tools are seemingly the only use cases that have approached viability. Even a year ago, I figured something else would have emerged by now.
alonsonic|7 months ago
I have a friend in finance that uses LLM powered products for financial analysis, he works in a big bank. Just now anthropic released a product to compete in this space.
Another friend in real estate uses LLM-powered lead qualification products; he runs marketing campaigns, and the AI handles the initial interaction via email or phone and then ranks the lead in their CRM.
I have a few friends that run small businesses and use LLM powered assistants to manage all their email comms and agendas.
I've also talked with startups in legal and marketing doing very well.
Coding is the theme that's talked about the most on HN, but there are a ton of startups and big companies creating value with LLMs.
philomath_mn|7 months ago
This is likely a selection bias: you only notice the obviously bad outputs. I have created plenty of outputs myself that are good/passable -- you are likely surrounded by these types of outputs without noticing.
Not a panacea, but can be useful.
MonkeyIsNull|7 months ago
I always think back to how Bezos and Amazon were railed against for losing money for years. People thought that would never work. And then when he started selling stuff other than books? People I know were like: please, he's desperate.
Someone, somewhere will figure out how to make money off it - just not most people.
alexpotato|7 months ago
Phase 1 - mid to late 1990s:
- "The Internet is going to change EVERYTHING!!!"
Phase 2 - late 1990s to early 2000s:
- "It's amazing and we are all making SO much money!"
- "Oh no! The bubble burst"
- "Of course everyone could see this coming: who is going to buy 40 lb bags of dogfood or their groceries over the Internet?!?!?"
Phase 3 - mid 2000s to 2020:
- "It is astounding the amount of money being made by tech companies"
- "Who could have predicted that social media would change the ENTIRE landscape??"
gonzobonzo|7 months ago
You have top scientists like LeCun arguing this position. I'd imagine all of these companies are desperately searching for the next big paradigm shift, but no one knows when that will be, and until then they need to squeeze everything they can out of LLMs.
moffkalast|7 months ago
Granted the initial investment is immense, and the results are not guaranteed which makes it risky, but it's like building a dam or a bridge. Being in the age where bridge technology evolves massively on a weekly basis is a recipe for being wasteful if you keep starting a new megaproject every other month though. The R&D phase for just about anything always results in a lot of waste. The Apollo programme wasn't profitable either, but without it we wouldn't have the knowledge for modern launch vehicles to be either. Or to even exist.
I'm pretty sure one day we'll have an LLM/LMM/VLA/etc. that's so good that pretraining a new one will seem pointless, and that'll finally be the time we get to (as a society) reap the benefits of our collective investment in the tech. The profitability of a single technology demonstrator model (which is what all current models are) is immaterial from that standpoint.
dmix|7 months ago
What are you basing this on? Personal feelings?
fendy3002|7 months ago
LLMs is too trivial to be expensive
EDIT: I presented the statement wrongly. What I mean is that the use cases for LLMs are trivial things; it shouldn't be expensive to operate them.
giancarlostoro|7 months ago
You hit the nail on the head for why I get so much hatred from "AI Bros", as I call them, when I say it will not truly take off until it runs on your phone effortlessly, because nobody wants to foot a trillion dollar cloud bill.
Give me a fully offline LLM that fits in 2GB of VRAM, and let's refine that so it can plug into external APIs and see how much farther we can take things without resorting to burning billions of dollars' worth of GPU compute. I don't care whether my answer arrives instantly; if I'm doing the research myself, I want to take my time to get the correct answer anyway.
saratogacx|7 months ago
If you want to play around a bit and are on Android, PocketPal, ChatterUI, MyDeviceAI, and SmolChat are good multi-model apps, and Google's Edge Gallery won't keep your chats but is a fun tech demo.
All are on github and can be installed using Obtainium if you don't want to
xnx|7 months ago
But have we ever had a general purpose technology (steam engine, electricity) that failed to change society?
Jach|7 months ago
Since you brought up supersonic jetliners, you're probably aware of the startup Boom in Colorado trying to bring them back. We'll see if they succeed. But yes, it would be a strange path, though a possible one, for LLMs to go away for a while and try to come back later.
You're going to have to cite some surveys for the "most people agree that the output is trite and unpleasant" and "almost universally disliked attempts to cram it everywhere" claims. There are some very vocal people against LLM flavors of AI, but I don't think they even represent the biggest minority, let alone a majority or near universal opinions. (I personally was bugged by earlier attempts at cramming non-LLM AI into a lot of places, e.g. Salesforce Einstein appeared I think in 2016, and that was mostly just being put off by the cutesy Einstein characterization. I generally don't have the same feelings with LLMs in particular, in some cases they're small improvements to an already annoying process, e.g. non-human customer support that was previously done by a crude chatbot front-end to an expert system or knowledge base, the LLM version of that tends to be slightly less annoying.)
Jach|7 months ago
I don't think it supports the bits I quoted, but it does include more negativity than I would have predicted before seeing it.