SirensOfTitan | 9 days ago

Regardless of the promise of the underlying technology, I do wonder about the long-term viability of companies like OpenAI and Anthropic. Not only are they quite beholden to companies like Nvidia or Google for hardware, but LLM tech as it stands right now will turn into a commodity.

It's why Amodei has spoken in favor of stricter export controls and Altman has pushed for regulation. They have no moat.

I'm thankful for the various open-weighted Chinese models out there. They've kept good pace with flagship models, and they're integral to avoiding a future where 1-2 companies own the future of knowledge labor. America's obsession with the shareholder in lieu of any other social consideration is ugly.

chasd00|9 days ago

I think Google ends up the winner. They can keep chugging along and just wait for everyone else to go bankrupt. I guess Apple sees it too, since they signed with Google and not OpenAI.

butlike|9 days ago

In addition to that, Google and Apple are demonstrated business partners. Google has consistently paid Apple billions to be the default search engine, so they have demonstrated they pay on time and are a known quantity. Imagine if OpenAI evaporated and Siri was left without a backend. It'd be too risky.

Aboutplants|9 days ago

The minute Apple chose Google, OpenAI became a dead duck. It will float for a while, but it can't compete with the likes of Google, their unlimited pockets, and better yet their access to data.

m-schuetz|9 days ago

Also, Gemini works absolutely fantastically right now. I find it provides better results for coding tasks than ChatGPT.

hansmayer|9 days ago

It is a rather attractive view, and I used to hold it too. However, seeing as Alphabet recently issued 100-year bonds to finance its AI CapEx bloat, they are not that far off from the rest of the AI "YOLO"s currently jumping off the cliff ...

sethops1|9 days ago

This is the conclusion I came to as well. Either make your own hardware, or drown paying premiums until you run out of money. For a while I was hopeful for some competition from AMD but that never panned out.

alex1138|9 days ago

Now if only Google could a) drop its commitment to censorship and b) stop prioritizing Youtube links in its answers

xnx|9 days ago

In a few years this will, amazingly, always have been obvious to everyone.

duped|9 days ago

Google has proven themselves to be incapable of monetizing anything besides ads. One should be deeply skeptical of their ability to bring consumer software to market, and keep it there.

piker|9 days ago

And what about Microsoft?

napolux|9 days ago

Downvote all you want: Google has all the money to keep up and just wait for the others to die. Apple is a different story, btw; they could probably buy OpenAI or Anthropic, but for now they're just waiting like Google. And since they need to provide users AI after the failure of Apple Intelligence, they prefer to pay Google and wait for the others to fight each other.

openai and anthropic know already what will happen if they go public :)

neya|9 days ago

Google is the new OpenAI. OpenAI is the new Google. Guess who wants to shove advertisements into paying customers' faces and take a % of their revenue for using their models to build products? Not Google.

munk-a|9 days ago

OpenAI is not viable. OpenAI is spending like Google without a war chest, and they have essentially nothing to offer outside of brand recognition. Nvidia propping them up to force AI training onto its chips, vs. Google's in-house cores, is their only viable path forward. Even if they develop a strong model, the commitments they've made are astronomically out of reach of all but the largest companies, and AI has proven to be a very low-moat market. They can't demand a markup sufficient to justify that spend - it's too trivial to undercut them.

Google/Apple/Nvidia - those with war chests that can treat this expenditure as R&D, write it off, and not be up to their eyeballs in debt - those are the most likely to win. It may still be a dark-horse, previously unknown company, but if so, that company will need to be a lot more disciplined about expenditures.

enceladus06|9 days ago

OpenAI and Anthropic don't have a moat. We will have actual open models like DeepSeek and Kimi matching the functionality of Opus 4.6 in Claude Code within six months, IMO. Competition is a good thing for the end-user.

zozbot234|9 days ago

The open-weight models are great but they're roughly a full year behind frontier models. That's a lot. There's also a whole lot of uses where running a generic Chinese-made model may be less than advisable, and OpenAI/Anthropic have know-how for creating custom models where appropriate. That can be quite valuable.

jnovek|9 days ago

I just did a test project using K2.5 on opencode and, for me, it doesn’t even come close to Claude Code. I was constantly having to wrangle the model to prevent it from spewing out 1000 lines at once and it couldn’t hold the architecture in its head so it would start doing things in inconsistent ways in different parts of the project. What it created would be a real maintenance nightmare.

It’s much better than the previous open models but it’s not yet close.

sigmar|9 days ago

>various open-weighted Chinese models out there. They've kept good pace with flagship models,

I don't think this is accurate. Maybe it will change in the future, but it seems like the Chinese models aren't keeping up with actual training techniques; they're largely using distillation techniques. Which means they'll always be catching up and never at the cutting edge. https://x.com/Altimor/status/2024166557107311057

A_D_E_P_T|9 days ago

> they're largely using distillation techniques. Which means they'll always be catching up and never at the cutting edge.

You link to an assumption, and one that's seemingly highly motivated.

Have you used the Chinese models? IMO Kimi K2.5 beats everything but Opus 4.6 and Gemini 3.1... and it's not exactly inferior to the latter, it's just different. It's much better at most writing tasks, and its "Deep Research" mode is by a wide margin the best in the business. (OpenAI's has really gone downhill for some reason.)

arthurcolle|9 days ago

I have been using a quorum composed of step-3.5-flash, Kimi k2.5 and glm-5 and I have found it outperforms opus-4.5 at a fraction of the cost

That's pretty cutting edge to me.

EDIT: It's not a swarm — it's closer to a voting system. All three models get the same prompt simultaneously via parallel API calls (OpenAI-compatible endpoints), and the system uses weighted consensus to pick a winner. Each model has a weight (e.g. step-3.5-flash=4, kimi-k2.5=3, glm-5=2) based on empirically observed reliability.

The flow looks like:

  1. User query comes in
  2. All 3 models (+ optionally a local model like qwen3-abliterated:8b) get called in parallel
  3. Responses come back in ~2-5s typically
  4. The system filters out refusals and empty responses
  5. Weighted voting picks the winner — if models agree on tool use (e.g. "fetch this URL"), that action executes
  6. For text responses, it can also synthesize across multiple candidates

The key insight is that cheap models in consensus are more reliable than a single expensive model. Any one of these models alone hallucinates or refuses more often than the quorum does collectively. The refusal filtering is especially useful: if one model over-refuses, the others compensate.
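The voting flow above could be sketched roughly like this. To be clear, this is a minimal illustration, not the actual agent: the weights and refusal heuristic are the ones mentioned above, but `call_model` is a stand-in for a real OpenAI-compatible API call, and the synthesis/tool-execution steps are omitted:

```python
import concurrent.futures

# Per-model reliability weights, as described above (illustrative values).
WEIGHTS = {"step-3.5-flash": 4, "kimi-k2.5": 3, "glm-5": 2}

# Crude refusal heuristic (assumption; the real filter is surely richer).
REFUSAL_MARKERS = ("i can't", "i cannot", "as an ai")

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real OpenAI-compatible chat-completion call."""
    raise NotImplementedError("wire up your API client here")

def is_refusal(text: str) -> bool:
    t = text.strip().lower()
    return not t or t.startswith(REFUSAL_MARKERS)

def quorum(prompt: str, caller=call_model) -> str:
    # Steps 1-3: fan the same prompt out to every model in parallel.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {m: pool.submit(caller, m, prompt) for m in WEIGHTS}
        responses = {m: f.result() for m, f in futures.items()}
    # Step 4: drop refusals and empty responses.
    candidates = {m: r for m, r in responses.items() if not is_refusal(r)}
    if not candidates:
        raise RuntimeError("all models refused or returned nothing")
    # Step 5: weighted vote - identical answers pool their models' weights.
    tally: dict[str, int] = {}
    for model, answer in candidates.items():
        key = answer.strip()
        tally[key] = tally.get(key, 0) + WEIGHTS[model]
    return max(tally, key=tally.get)
```

With these weights, kimi-k2.5 and glm-5 agreeing (3+2) outvotes step-3.5-flash alone (4), which is the point: consensus among cheap models beats any single one.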

Tooling: it's a single Python agent (~5200 lines) with protocol-based tool dispatch — 110+ operations covering filesystem, git, web fetching, code analysis, media processing, a RAG knowledge base, etc. The quorum sits in front of the LLM decision layer, so the agent autonomously picks tools and chains actions. The purpose is general — coding, research, data analysis, whatever.

I won't include it for length, but I just kicked off a prompt to get some info on the recent Trump tariff Supreme Court decision: it fetched stock data from Benzinga/Google Finance, then researched the SCOTUS tariff ruling across AP, CNN, Politico, The Hill, and CNBC, all orchestrated by the quorum picking which URLs to fetch and synthesizing the results, continuing until something like 45 URLs were fully processed. The output was longer than a typical single chatbot response: you get all the non-determinism of what the models actually ended up doing over the long-running execution, and then it needs to reach consensus, which means every response gets at least one (or N) additional passes across the other models.

Cost-wise, these three models are all either free-tier or pennies per million tokens. The entire session above (dozens of quorum rounds, multiple web fetches) cost less than a single Opus prompt.

parliament32|9 days ago

Does that actually matter? If "catching up" means "a few months behind" at worst, for... free?

34679|9 days ago

I can't shake the feeling that the RAM shortage was intentionally created to serve as a sort of artificial moat by slowing or outright preventing the adoption of open weight models. Altman is playing with hundreds of billions of other people's dollars, trying to protect (in his mind) a multi-trillion dollar company. If he could spend a few billion to shut down access to the hardware people need to run competitor's products, why wouldn't he?

chasd00|9 days ago

From what I understand the RAM producers see the writing on the wall. They’re not going to invest in massively more capacity only to have it sit completely idle in 10 years.

RAM shortage is probably a bubble indicator itself. That industry doesn’t believe enough in the long term demand to build out more capacity.

zozbot234|9 days ago

It's very difficult to "intentionally create" a real shortage. You can hoard as much as you want, but people will expect you to dump it all right back onto the market unless you really have a higher-value use for the stuff you hoarded (and then you didn't intentionally create anything; you just bought something you needed!).

Plus producers will now feel free to expand production and dump even more onto the market. This is great if you needed that amount of supply, but it's terrible if you were just trying to deprive others.

tmaly|9 days ago

Hard drives and GPUs seem to be facing the same fate.

llm_nerd|9 days ago

Anthropic, at least, has gone to lengths to avoid hardware lock-in or being open to extortion of the nvidia variety. Anthropic is running their models on nvidia GPUs, but also Amazon Trainium and Google's TPUs. Massive scale-outs on all three, so clearly they've abstracted their operations enough that they aren't wed to CUDA or anything nvidia-specific.

Similarly, OpenAI has made some massive investments in AMD hardware, and have also ensured that they aren't tied to nvidia.

I think it's Nvidia that has less of a moat than many imagine, given that they're a $4.5T company. While small software shops might define their entire solution via CUDA, to the large firms this is just one possible abstraction engine. So if an upstart just copy-pastes a massive number of relatively simple tensor cores and earns their business, they can embrace it.

tinyhouse|9 days ago

OpenAI is just playing catch-up at this point; they completely lost their way, in my view.

Anthropic, on the other hand, is very capable, and given the success of Claude Code and Cowork, I think they will maintain their lead across knowledge work for a long time just by having the best data to keep improving their models and everything around them. It's also the hottest tech company right now, like Google was back in the day.

If I had to bet on two companies that will win the AI race in the West, it's Anthropic and Google: Google mostly on the consumer side and Anthropic in enterprise. OpenAI will probably IPO soon to shift the risk to the public.

chasd00|9 days ago

If Anthropic continues getting their foot in the enterprise door, then maybe they can tap into enterprise cloud spending. If Anthropic can come up with services and things (db, dns, networking, webservers, etc.) that Claude Code will then prefer, then maybe they become a cloud provider. To me, and I am no business expert btw, that could be a path to sustainable financials.

Edit: one thing I didn't think about is that Anthropic more or less runs at the pleasure of AWS. If Amazon sees Anthropic as a threat to AWS, then it could be lights out.

AznHisoka|9 days ago

Anthropic at least seems to be doing well with enterprises. OpenAI doesn't have that level of trust for enterprise use cases, and commoditization is a bigger issue with consumers, who can just switch to another tool easily.

SirensOfTitan|9 days ago

Yeah, Anthropic is inarguably in a better position, but I don’t see how they justify their fundraising unless they find some entrenched position that is difficult for competitors to replicate.

Enterprise switching costs aren’t 0, but they’re much less than most other categories, especially as models mature and become more fungible.

The best moat I can think of is a patentable technique that facilitates a huge leap that Anthropic can defend, but even then, Chinese companies could easily ignore those patents. And I don’t even know if AI companies could stick to those guns as their training is essentially theft of huge portions of copyrighted material.

wejwej|9 days ago

To take the other side of this: even as computers got commoditized, there was still a massive benefit to using cloud computing. Could the same happen with LLMs as hardware becomes more and more specialized? I personally have no idea, but I love that there's a bunch of competition, and I totally agree with your point that regulation and export controls are just ways to make it harder for new orgs to compete.

idopmstuff|9 days ago

I do think the models themselves will get commoditized, but I've come around to the opinion that there's still plenty of moat to be had.

On the user side, memory and context, especially as continual learning is developed, is pretty valuable. I use Claude Code to help run a lot of parts of my business, and it has so much context about what I do and the different products I sell that it would be annoying to switch at this point. I just used it to help me close my books for the year, and the fact that it was looking at my QuickBooks transactions with an understanding of my business definitely saved me a lot of time explaining.

On the enterprise side, I think businesses are going to be hesitant to swap models in and out, especially when they're used for core product functionality. It's annoying to change deterministic software, and switching probabilistic models seems much more fraught.

ahussain|9 days ago

People were saying the same last year, and then Anthropic launched Claude Code which is already at a $2.5B revenue run rate.

LLMs are useful and these companies will continue to find ways to capture some of the value they are creating.

ulfbert_inc|9 days ago

>LLM tech as it stands right now will turn into a commodity

I have yet to see an in-depth analysis that supports this claim.

otabdeveloper4|9 days ago

It's already a commodity. The strongest use case is self-hosted pornography generation.

lvl155|9 days ago

I think an LLM by itself is basically a commodity at this point. Not quite interchangeable, but the differences are more artistic than technological. I used to think it was data, and that that would give companies like Google a leg up.

delaminator|9 days ago

Anthropic is also using lots of Amazon hardware for inference.

techpression|9 days ago

How is censorship / ”alternative information” affecting them? Genuinely curious as I’ve only read briefly about it and it was ages ago.

whynotmaybe|9 days ago

I tried DeepSeek a few months ago and asked about the Tiananmen Square protests and massacre.

At first the answer was "I can't say anything that might hurt people," but with a little persuasion it went further.

The answer wasn't the current official one, but way more nuanced than Wikipedia's article. More in the vein of "we don't know for sure", "different versions", "external propaganda", "some officials have lied and been arrested since".

In the end, when I asked whether I should trust the government or seek out multiple sources, it strongly suggested using multiple sources to form an opinion.

= not as censored as I expected.

jpalomaki|9 days ago

Both Anthropic and OpenAI are working hard to move away from being "just" the LLM provider in the background.

KoolKat23|9 days ago

Anthropic, I feel, will be alright. They have their niche, it's good, and people actually do pay for their services. Why do people still use Salesforce when there are other, free CRMs? They also haven't, from what I can tell, scaled for some imaginary future growth.

OpenAI, I'm sorry to say, are all over the place. They're good at what they do, but they try to do too much and need near-Ponzi-style growth to sustain their business model.

nvarsj|9 days ago

I don't think you can put OpenAI and Anthropic together like that.

Anthropic has actually cracked Agentic AI that is generally useful. No other company has done that.

999900000999|9 days ago

They'll ban Chinese models, or do something like calling them security risks without proof.

Enterprise customers will gladly pay 10x to 20x for American models. Of course, this means American tech companies will start to fall behind, combined with our recent xenophobia.

Almost all the top AI researchers are either Chinese nationals or recent immigrants. With the way we've been treating immigrants lately ( plenty of people with status have been detained, often for weeks), I can't imagine the world's best talent continuing to come here.

It's going to be an interesting decade y'all.