SirensOfTitan|9 days ago
It's why Amodei has spoken in favor of stricter export controls and Altman has pushed for regulation. They have no moat.
I'm thankful for the various open-weight Chinese models out there. They've kept good pace with flagship models, and they're integral to avoiding a future where 1-2 companies own the future of knowledge labor. America's obsession with the shareholder in lieu of any other social consideration is ugly.
napolux|9 days ago
OpenAI and Anthropic already know what will happen if they go public :)
munk-a|9 days ago
Google/Apple/Nvidia - those with war chests that can treat this expenditure as R&D, write it off, and not be up to their eyeballs in debt - those are the most likely to win. It may still be a dark-horse, previously unknown company, but if it is, that company will need to be a lot more disciplined about expenditures.
jnovek|9 days ago
It’s much better than the previous open models but it’s not yet close.
sigmar|9 days ago
I don't think this is accurate. Maybe it will change in the future, but it seems like the Chinese models aren't keeping up with actual training techniques; they're largely using distillation. Which means they'll always be catching up and never at the cutting edge. https://x.com/Altimor/status/2024166557107311057
A_D_E_P_T|9 days ago
You link to an assumption, and one that's seemingly highly motivated.
Have you used the Chinese models? IMO Kimi K2.5 beats everything but Opus 4.6 and Gemini 3.1... and it's not exactly inferior to the latter, it's just different. It's much better at most writing tasks, and its "Deep Research" mode is by a wide margin the best in the business. (OpenAI's has really gone downhill for some reason.)
arthurcolle|9 days ago
That's pretty cutting edge to me.
EDIT: It's not a swarm — it's closer to a voting system. All three models get the same prompt simultaneously via parallel API calls (OpenAI-compatible endpoints), and the system uses weighted consensus to pick a winner. Each model has a weight (e.g. step-3.5-flash=4, kimi-k2.5=3, glm-5=2) based on empirically observed reliability.
The flow looks like:
The key insight is that cheap models in consensus are more reliable than a single expensive model. Any one of these models alone hallucinates or refuses more often than the quorum does collectively. The refusal filtering is especially useful: if one model over-refuses, the others compensate.

Tooling: it's a single Python agent (~5,200 lines) with protocol-based tool dispatch, 110+ operations covering filesystem, git, web fetching, code analysis, media processing, a RAG knowledge base, etc. The quorum sits in front of the LLM decision layer, so the agent autonomously picks tools and chains actions. Purpose is general: coding, research, data analysis, whatever.

I won't include it for length, but I just kicked off a prompt to get some info on the recent Trump tariff Supreme Court decision: it fetched stock data from Benzinga/Google Finance, then researched the SCOTUS tariff ruling across AP, CNN, Politico, The Hill, and CNBC, all orchestrated by the quorum picking which URLs to fetch and synthesizing the results, continuing until something like 45 URLs were fully processed. Output was longer than a typical single chatbot response: you get all the non-determinism from what the models actually did during the long-running execution, and then reaching consensus means every response gets at least one (or N) additional passes across the other models.
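The weighted voting and refusal filtering described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the author's code: the model names and weights come from the comment, while `is_refusal` and the scoring logic are assumptions.

```python
# Hypothetical sketch of a weighted-consensus quorum over parallel model calls.
# Model names/weights are from the comment; everything else is an assumption.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")

# Empirically assigned reliability weights (per the comment).
WEIGHTS = {"step-3.5-flash": 4, "kimi-k2.5": 3, "glm-5": 2}

def is_refusal(text: str) -> bool:
    """Crude refusal filter: drop over-cautious responses before voting."""
    return text.strip().lower().startswith(REFUSAL_MARKERS)

def quorum(responses: dict[str, str]) -> str:
    """Pick a winner by weighted vote over non-refusing models.

    `responses` maps model name -> its answer to the shared prompt
    (gathered via parallel calls to OpenAI-compatible endpoints).
    """
    votes: dict[str, int] = {}
    for model, answer in responses.items():
        if is_refusal(answer):
            continue  # refusal filtering: the other models compensate
        votes[answer] = votes.get(answer, 0) + WEIGHTS.get(model, 1)
    if not votes:
        raise RuntimeError("all models refused")
    return max(votes, key=votes.get)  # answer with the highest total weight

# Two lighter models agreeing (3 + 2 = 5) outvote the heaviest model (4).
print(quorum({
    "step-3.5-flash": "Answer A",
    "kimi-k2.5": "Answer B",
    "glm-5": "Answer B",
}))  # prints "Answer B"
```

In practice the candidate answers would rarely be string-identical, so a real system would need a similarity or judge step to cluster equivalent answers before voting; the weighted tally is the core idea.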
chasd00|9 days ago
RAM shortage is probably a bubble indicator itself. That industry doesn't believe enough in the long-term demand to build out more capacity.
zozbot234|9 days ago
Plus producers will now feel free to expand production and dump even more onto the market. This is great if you needed that amount of supply, but it's terrible if you were just trying to deprive others.
llm_nerd|9 days ago
Similarly, OpenAI has made some massive investments in AMD hardware and has ensured that it isn't tied to Nvidia.
I think it's Nvidia that has less of a moat than many imagine, given that it's a $4.5T company. While small software shops might define their entire solution via CUDA, to the large firms this is just one possible abstraction engine. So if an upstart just copy-pastes a massive number of relatively simple tensor cores and earns their business, they can embrace it.
tinyhouse|9 days ago
Anthropic, on the other hand, is very capable, and given the success of Claude Code and Cowork, I think they will maintain their lead across knowledge work for a long time just by having the best data to keep improving their models and everything around them. It's also the hottest tech company rn, like Google was back in the day.
If I had to bet on two companies that will win the AI race in the West, it's Anthropic and Google: Google mostly on the consumer side and Anthropic in enterprise. OpenAI will probably IPO soon to shift the risk to the public.
chasd00|9 days ago
Edit: one thing I didn't think about is that Anthropic more or less runs at the pleasure of AWS. If Amazon sees Anthropic as a threat to AWS, then it could be lights out.
SirensOfTitan|9 days ago
Enterprise switching costs aren’t 0, but they’re much less than most other categories, especially as models mature and become more fungible.
The best moat I can think of is a patentable technique that facilitates a huge leap that Anthropic can defend, but even then, Chinese companies could easily ignore those patents. And I don't even know if AI companies could stick to their guns there, as their training is essentially theft of huge portions of copyrighted material.
idopmstuff|9 days ago
On the user side, memory and context, especially as continual learning is developed, is pretty valuable. I use Claude Code to help run a lot of parts of my business, and it has so much context about what I do and the different products I sell that it would be annoying to switch at this point. I just used it to help me close my books for the year, and the fact that it was looking at my QuickBooks transactions with an understanding of my business definitely saved me a lot of time explaining.
On the enterprise side, I think businesses are going to be hesitant to swap models in and out, especially when they're used for core product functionality. It's annoying to change deterministic software, and switching probabilistic models seems much more fraught.
ahussain|9 days ago
LLMs are useful and these companies will continue to find ways to capture some of the value they are creating.
ulfbert_inc|9 days ago
I have yet to see an in-depth analysis that supports this claim.
whynotmaybe|9 days ago
At first the answer was "I can't say anything that might hurt people" but with a little persuasion it went further.
The answer wasn't the current official answer but was far more nuanced than Wikipedia's article. More in the vein of "we don't know for sure", "different versions", "external propaganda", "some officials have lied and been arrested since".
In the end, when I asked whether I should trust the government or seek out multiple sources, it strongly suggested using multiple sources to form an opinion.
= not as censored as I expected.
KoolKat23|9 days ago
OpenAI, I'm sorry to say, is all over the place. They're good at what they do, but they try to do too much and need near-Ponzi-style growth to sustain their business model.
nvarsj|9 days ago
Anthropic has actually cracked Agentic AI that is generally useful. No other company has done that.
999900000999|9 days ago
Enterprise customers will gladly pay 10x to 20x for American models. Of course, this means American tech companies will start to fall behind, combined with our recent xenophobia.
Almost all the top AI researchers are either Chinese nationals or recent immigrants. With the way we've been treating immigrants lately (plenty of people with status have been detained, often for weeks), I can't imagine the world's best talent continuing to come here.
It's going to be an interesting decade y'all.