Not only has OpenAI's market share gone down significantly in the last six months, but Nvidia has also been using its newfound liquid funds to train its own family of models[1]. An alliance with OpenAI just makes less sense today than it did six months ago.
> Nvidia has been using its newfound liquid funds to train its own family of models
Nvidia has always had its own family of models; it's nothing new and not something you should read too much into IMHO. They use those as templates other people can leverage, and they are of course optimized for Nvidia hardware.
Nvidia has been training models in the Megatron family, as well as many others, since at least 2019, and those have been used as blueprints by many players. [1]
And the whole AI craze is becoming nothing but a commodity business where all kinds of models are popping in and out, one better this update, another better the next, etc. In short, they're basically indistinguishable to the average layman.
Commodity businesses are price chasers. That's the only thing to compete on when product offerings are similar enough.
AI valuations are not set up for this. AI valuations assume 'winner takes all' outcomes. Those assumptions are clearly now falling apart.
Yeah. Even if OpenAI models were the best, I still wouldn't use them, given how despicable the Sam Altman persona is (constantly hyping, lying, asking for no regulations, then asking for regulations, leaked emails where founders say they just wanna get rich without any consideration of their initial "open" claims...). I know other companies are not better, but at least they have a business model and something to lose.
1. OpenAI bet largely on consumer. Consumers have mostly rejected AI. And in a lot of cases even hate it (can't go on TikTok or Reddit without people calling something slop, or hating on AI generated content). Anthropic on the other hand went all in on B2B and coding. That seems to be the much better market to be in.
> Anthropic relies heavily on a combination of chips designed by Amazon Web Services known as Trainium, as well as Google’s in-house designed TPU processors, to train its AI models. Google largely uses its TPUs to train Gemini. Both chips represent major competitive threats to Nvidia’s best-selling products, known as graphics processing units, or GPUs.
So which leading AI company is going to build on Nvidia, if not OpenAI?
"Largely" is doing a lot of heavy lifting here. Yes, Google and Amazon are making their own AI chips, but they are also buying as many Nvidia chips as they can get their hands on. As are Microsoft, Meta, xAI, Tesla, Oracle and everyone else.
Nvidia had the chance to build its own AI software and chose not to. It has been a good choice so far, better to sell shovels than go to the mines - but they could still go mining if the other miners start making their own shovels.
If I were Nvidia I would be hedging my bets a little. OpenAI looks like it's on shaky ground, it might not be around in a few years.
That’s interesting, I didn’t know that about Anthropic. I guess it wouldn’t really make sense to compete with OpenAI and everyone else for Nvidia chips if they can avoid it.
It's almost as if everyone here assumed Nvidia would have no competition for a long time, but it has been known for a while that many competitors are coming after their data center revenues. [0]
> So which leading AI company is going to build on Nvidia, if not OpenAI?
It's xAI.
But what matters is that there is more competition for Nvidia, and they bought Groq to reduce that. OpenAI is building its own chips, as is Meta.
The real question is this: What happens when the competition catches up with Nvidia and takes a significant slice out of their data center revenues?
This video is fascinating: it breaks down the crazy financial positions of all the AI companies and how they are all involved with one called CoreWeave (which could easily bring the whole thing tumbling down): https://youtu.be/arU9Lvu5Kc0?si=GWTJsXtGkuh5xrY0
I don’t think so. I think it is positioning for the unknown future and hedging.
For example, Amazon isn’t able to train its own models so it hedges by investing in Anthropic and OpenAI. Oracle, same with OpenAI deal. Nvidia wants to stay in OpenAI and Anthropic’s tech stack.
We know that it is all a grift before the inevitable collapse, so everyone is racing for the exit before that happens.
I guarantee you that in 10 years' time, you will get claims of unethical conduct by those companies only after the mania has ended (and by then the claimants will have sold all their RSUs).
It’s probably not really related, but this bug and the saga of OpenAI trying and failing to fix it for two weeks is not indicative of a functional company:
OTOH, if Anthropic did that to Claude Code, and there weren't a moderately straightforward workaround, and Anthropic didn't revert it quickly, it might actually be a risk-the-whole-business issue. Nothing makes people jump ship quite like the ship refusing to go anywhere for weeks while the skipper fumbles around and keeps claiming to have fixed the engines.
Also, the fact that most business users being unable to log in to the agent CLI for two weeks running is not major news suggests that OpenAI has rather less developer traction than they would like. (Personal users are fine. Users who are running locally on an X11-compatible distro, and thus have DISPLAY set, are okay because the new behavior doesn't trigger. It kind of seems like everyone else gets nonsense errors out of the login flow, with precise failures that change every couple of days while OpenAI fixes yet another bug.)
I don't know what you're so surprised about. The ticket reads like any typical big-enterprise ticket. The UI works, headless doesn't (headless is what only hackers use, so not a priority, etc.). Then they find the support guy who knows what headless is, and the doc page with a number of workarounds. There is even an ssh tunnel (how did that make it into enterprise docs?!) and the classic: copy logged-in credentials from the UI machine once you've logged in there. Blah blah blah, and again the classic:
> Root Cause
> The backend enforces an Enterprise-only entitlement for codex_device_code_auth on POST /backend-api/accounts/{account_id}/beta_features. Your account is on the Team plan, so the server rejects the toggle with {"detail":"Enterprise plan required."}
And so on and so forth. On any given day I have several such long-term tickets that ultimately get escalated to me (I'm in dev, and usually the guy who would pull up the page with the ssh tunnel or the credentials copying :)
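For what it's worth, that root cause boils down to a one-line server-side plan check. Here's a minimal Python sketch of such an entitlement gate; the plan name and response shape are taken from the error message quoted above, but the function itself is hypothetical, not OpenAI's actual backend code:

```python
# Hypothetical sketch of a server-side entitlement gate like the one in the
# quoted root cause: the beta-feature toggle is rejected unless the account's
# plan carries the required entitlement.
def toggle_beta_feature(account: dict, feature: str):
    # Features gated to specific plans (illustrative mapping).
    required_plan = {"codex_device_code_auth": "enterprise"}
    needed = required_plan.get(feature)
    if needed and account.get("plan") != needed:
        # Mirrors the error body from the ticket above.
        return 403, {"detail": f"{needed.capitalize()} plan required."}
    return 200, {"feature": feature, "enabled": True}
```

The bug class is exactly this shape: the gate is correct in isolation, but nobody noticed that every Team-plan CLI user hits it on login.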
The article references an “undisciplined” business. I wonder if this is speaking to projects like Sora. Sora is technically impressive and was fun for a moment, but it’s nowhere near the cultural relevance of TikTok, while being, I believe, significantly more expensive, harder to monetize, and consuming a significant share of their precious GPU capacity. Maybe I’m just not the demo and missing something.
And yes, Sam is incredibly unlikable. Every time I see him give an interview, I am shocked how poorly prepared he is. Not to mention his “ads are distasteful, but I love my supercar and ridiculous sunglasses.”
And Google and Microsoft have huge distribution advantages that OpenAI doesn’t. Google and Microsoft can add AI to their operating systems, browsers, and office apps that users are already using. OpenAI just has a website and a niche browser. To Google and Microsoft, AI is a feature, not a product.
This is the argument I continue to have with people. First mover isn't always an advantage. I think OpenAI will be sold for pennies on the dollar someday (within the next 5 years, after they run out of funding).
Google has data, TPUs, and a shitload of cash to burn
> He [Jensen Huang] has also privately criticized what he has described as a lack of discipline in OpenAI’s business approach and expressed concern about the competition it faces from the likes of Google and Anthropic, some of the people said.
People talk about an AI bubble. What we actually have is a GPU bubble. NVidia makes really expensive GPUs for AI. Others also make GPUs.
Companies like Google produce and operate AI models largely using their own TPUs rather than NVidia's GPUs. We've seen the Chinese produce pretty competitive open models with either older NVidia GPUs or alternative GPUs because they are not allowed to buy the newer ones. And AMD, Intel and other chip makers are also eager to get in on the action. Companies like Microsoft, Amazon, etc. have their own chips as well (similar to Google). All the hyperscalers are moving away from NVidia.
And then Apple runs a non Intel and non NVidia based range of workstations and laptops that are pretty popular with AI researchers because the M series CPU/GPU/NPU is pretty decent value for running AI models. You see similar movement with ARM chips from Qualcomm and others. They all want to run AI models on phones, tablets, laptops. But without NVidia.
NVidia's bubble is about vastly overcharging for a thing that only they can provide. Their GPU chips have enormous margins relative to CPU chips coming out of the same or similar machines. That's a bubble. As soon as you introduce competition, the company with the best price/performance wins. NVidia is still pretty good at what they do. But not enough to justify an order-of-magnitude price/cost difference.
NVidia's success has been predicated on its proprietary software and instruction set (CUDA). That's a moat that won't last. The reason Google can use its own TPUs rather than CUDA is that it worked hard to get rid of its CUDA dependence. Same for the other hyperscalers. At this point they can do training and inference without CUDA/NVidia, and it's more cost-effective.
The reason that this $100B deal is apparently being reconsidered is that it is a bad deal for OpenAI. It was going to overpay for a solution that it can get cheaper elsewhere. It's bad news for NVidia, good news for OpenAI. This deal started out with just NVidia, but at this point there are also deals with AMD, MS, and others. OpenAI, like the other hyperscalers, is not betting the company on NVidia/CUDA. Good for them.
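The moat erosion is easy to picture in code. Modern frameworks route model code through a neutral dispatch layer, so nothing user-facing mentions CUDA directly. A toy sketch of that idea (the backend names and priority order here are illustrative, not any real framework's API):

```python
# Toy sketch of hardware-neutral dispatch: model code asks for "an
# accelerator", and the runtime picks whichever one is actually present.
PRIORITY = ("tpu", "trainium", "rocm", "cuda", "cpu")

def pick_backend(available: set) -> str:
    """Return the highest-priority backend present on this machine."""
    for name in PRIORITY:
        if name in available:
            return name
    return "cpu"  # always-available fallback
```

Once every framework works this way, swapping NVidia out becomes a deployment detail rather than a rewrite.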
I know OpenAI isn't a popular company here (anymore) but the doomerism in this thread seems a bit too hasty. People were just as doomy when Altman was sacked, and it turned into nothing and the industry market caps have doubled or even tripled since.
I felt anxious about all the insane valuations and spending around AI lately, and I knew it couldn't last (I mean there's only so much money, land, energy, water, business value, etc). But I didn't really know when it was going to collapse, or why. But recently I've been diving into using local models, and now it's way more clear. There seems to be a specific path for the implosion of AI:
- Nvidia is the most valuable company. Why? It makes GPUs. Why does that matter? Because AI is faster on them than on CPUs, because ASICs are too narrowly useful, and because of first-mover advantage. AMD makes GPUs that work great for AI, but they're a fraction of the value of Nvidia, despite making more useful products than Nvidia. Why? Nvidia just got there first, people started building on them and haven't stopped, because it's the path of least resistance. But if Nvidia went away tomorrow, investors would just pour money into AMD. So Nvidia doesn't have any significant value over AMD other than that people are lazy and just buying the hot thing. Nvidia was less valuable than AMD before; they'll return there eventually. All AMD needs is more adoption and investment.
- Every frontier model provider out there has invested billions to get models to the advanced state they're in today. But every single time they advance the state of the art, open weights soon match them. Very soon, there won't be any significant improvement, and open weights will be the same as frontier, meaning there's no advantage to paying for frontier models. So within a few years, there will be no point in paying OpenAI, Anthropic, etc. Again, these were just first movers in a commodity market. The value just isn't there. They can still provide unique services, tailored polished apps, etc. (Anthropic is already doing this by banning users who have the audacity to use their fixed-price plans with non-Anthropic tools). But with AI code tools, anyone can do this. They are making themselves obsolete.
- The final form of AI coding is orchestrated agent-driven vibe-coding with safeguards. Think an insane asylum with a bowling league: you still want 100 people to autonomously (and in parallel) knock the pins over, but you have to prevent the inmates from killing anyone. That's where the future of coding is. It's just too productive to avoid. But with open models and open source interfaces, anyone can do this, whether with hosted models (on any of 50 different providers) or a Beowulf cluster of cobbled-together cheap hardware in a garage.
- Eventually, in like 5-10 years (a lifetime away), after AI Beowulfs have been a fad for a while, people will tire of it and move back to the cloud, where they can run any model they want on a K8s cluster full of GPUs, basically the same as today. Difference between now and then is, right now everyone is chasing Anthropic because their tools and models are slightly better. But by then, they won't be. Maybe people will use their tools anyway? But they won't be paying for their models. And it's not just price: one of the things you learn quickly by running models, is they're all good for different things. Not only that, you can tweak them, fine-tune them, and make them faster, cheaper, better than what's served up by frontier models. So if you don't care about the results or cost, you could use frontier, but otherwise you'll be digging deep into them, the same way some companies invest in writing their own software vs paying for it.
- Finally, there's the icing on the cake: LLMs will be cooked in 10 years. I keep reading from AI research experts that "LLMs are a dead end" - and it turns out it's true. LLMs are basically only good because we invest an unsustainable amount of money in brute-forcing a relatively dumb form of iteration: download all knowledge, do some mind-bogglingly expensive computational math on it, tweak the results, repeat. You can only run that loop so many times, because fundamentally, all you're doing is trying to guess your way to an answer from a picture of the past. It doesn't actually learn the way a living organism learns - from experience, in real time, going forward; LLMs only look backward. It's like taking a snapshot of all the books a 6-year-old has read, then doing tweaks to try to optimize the knowledge from those books, then doing it again. There's only so much knowledge, only so many tweaks. The sensory data of a single year of the lived experience of a 6-year-old is many times more information than everything ever recorded by man. Reinforcement learning actually gives you progressive, continuously improved knowledge. But it's slow, which is why we aren't doing it much. We do LLMs instead because we can speed-run them. But the game has an end, and it's the total sum of our recorded knowledge and our tweaks.
So LLMs will plateau, frontier models will make no sense, all lines of code will be hands-off, and Nvidia will return to making hardware for video games. All within about 10 years. With the caveat that there might be a shift in global power and economic stability that interrupts the whole game.... but that's where we stand if things keep on course. Personally, I am happy to keep using AI and reap the benefits of all these moronic companies dumping their money into it, because the open weights continue being useful after those companies are dead. But I'm not gonna be buying Nvidia stock anytime soon, and I'm definitely not gonna use just one frontier model company.
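The "safeguarded agents" shape from the list above can be sketched in a few lines. The guard rule and action strings here are toy placeholders, not any real agent framework:

```python
# Minimal sketch of parallel agents behind a safeguard: every proposed
# action passes through guard() before it is allowed to run.
from concurrent.futures import ThreadPoolExecutor

def guard(action: str) -> bool:
    # Toy safety rule: refuse anything that looks destructive.
    return not action.startswith(("rm ", "drop ", "delete "))

def run(action: str) -> str:
    # Stand-in for executing an agent's proposed action.
    return f"done: {action}" if guard(action) else f"blocked: {action}"

def orchestrate(actions, parallelism=4):
    # Run the "inmates" in parallel; the guard keeps the bowling league safe.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(run, actions))
```

The point is that nothing here depends on whose model proposes the actions; the orchestration layer is trivially reproducible with open tooling.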
I've thought about this too.
I do agree that open source models look good and enticing, especially from a privacy standpoint.
But these are always going to remain niche solutions for power users.
I'm not one of them.
I can't be bothered to set up that whole thing (local or cloud) to gain some privacy and end up with an inferior model and tool. Let's not forget about the cost as well!
Right now I'm paying for Claude and Gemini.
I run out of Claude tokens real fast, but I can just keep on going using Gemini/GeminiCLI for absolutely no cost it seems like.
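That "run out of Claude, keep going on Gemini" pattern is easy to automate. A hypothetical sketch of the fallback loop (the provider callables are stand-ins, not real SDK calls):

```python
# Hypothetical provider-fallback loop: try providers in order of preference
# and move on whenever one reports its quota is exhausted.
class QuotaExceeded(Exception):
    pass

def ask(prompt: str, providers):
    for name, call in providers:
        try:
            return name, call(prompt)
        except QuotaExceeded:
            continue  # out of tokens here; try the next provider
    raise RuntimeError("all providers exhausted")
```

Which is another small reason the models themselves commoditize: once your tooling falls back transparently, you stop caring whose model answered.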
The closed LLMs with the largest number of users will eventually outperform the open ones too, I believe.
They have a lot of closed data that they can train their next generation on.
Especially the LLMs that the scientific community uses will be a lot more valuable (for everyone).
So in terms of quality, the closed LLMs should eventually outperform the open ones, I believe, which is indeed worrisome.
I also felt anxious in early December about the valuations, but one thing remains certain.
Compute is in heavy demand, regardless of which LLM people use.
I can't go back to pre-AI. I want more and more and faster and faster AI.
The whole world is moving that way it seems like.
I'm invested in physical AI atm (chips, RAM, ...), whose valuations look decently cheap.
Well, they might have gotten a little wary from previous boom and bust cycles.
Perhaps they are a bit wary about the economic sustainability of the whole AI thing.
However, perhaps they are also driven by greed at this point. Why not just constrain supply and increase margins while there is no real competitor?
Idk about this news specifically, but Oracle CDS prices are moving. The link below says 30k layoffs may hit Oracle, which I feel is a bit hyperbolic, so this article may not be grounded in reality.
I would love it if AI fizzled out and nvidia had to go back to making gaming cards. Just trying to have a simple life here and play video games, and ridiculous hype after hype keeps making it expensive.
Important for what? Google's and Anthropic's models are already better, Google actually makes money, and both are US companies. What strategic relevance does OpenAI have?
Unrelated: does anyone else think that Jensen's gatorskin leather jacket at their latest conference didn't suit him at all? It felt very "witness my wealth" and out of character.
[1] https://blogs.nvidia.com/blog/open-models-data-tools-acceler...
[1] https://arxiv.org/abs/1909.08053
2. Sam Altman is profoundly unlikable.
[0] https://news.ycombinator.com/item?id=45429514
https://techcrunch.com/2026/01/26/nvidia-invests-2b-to-help-...
It’s all jockeying for position.
https://github.com/openai/codex/issues/9253
Microsoft has GitHub - the world’s biggest pile of code training data, plus infinite cash.
OpenAI has …… none of these advantages.
https://preview.redd.it/sam-altman-on-the-model-v0-7u2a2o7lr...
The tools on top of the models are the path and people building things faster is the value.
Those without models are hugely vulnerable to sudden rug pulls.
https://www.theregister.com/2026/01/29/oracle_td_cowen_note/
Edit: Another src https://www.cio.com/article/4125103/oracle-may-slash-up-to-3...