Ask HN: Better hardware means OpenAI, Anthropic, etc. are doomed in the future?
5 points| kart23 | 17 days ago
Do you think this is possible, and what are these companies' plans in that event?
raw_anon_1111|14 days ago
On the business side, to a first approximation, no one is running their own computers in their office. They are using either a Colo or cloud service.
Speaking of which, AI accessed through an API key is not a product that most people buy. They buy products that use AI, like ChatGPT. And OpenAI has no illusions about becoming profitable on $20/month subscriptions, much less on advertising.
The money that AI companies make from selling API access directly (except maybe Anthropic via Claude Code) pales in comparison to what they make selling through cloud providers who then sell to businesses.
> I could see people buying a desktop they keep at home, and having all their personal inference running on that one machine. Or even having inference pools to distribute load among many people
Yes, I'm sure my 80-year-old mother who uses ChatGPT is going to get together with her sisters and buy computers that they can network together over their 30 Mbps uplink cable modem…
This is so much not how normal people operate.
atleastoptimal|15 days ago
Your premise makes sense if the benefits of an AI model topped out at something a person's computer could run. However, scaling laws seem to have no limit yet (perhaps because intelligence itself has no "limit"), so the labs will retain a significant advantage from scale, hosting models with a distinct comparative advantage over even the best local models.
fogzen|17 days ago
Maybe it takes a bit longer than 5 years, but that's where we're going. Already, the only reason you're not interacting with a personal assistant for everything isn't LLM capability but the lack of tooling.
ediblelegible|15 days ago
They are already moving into enterprise, government, and product software. I think they'll find a way to make money even if access to their models is no longer a huge moat.
verdverm|17 days ago
2. The best models are still worth it, unclear when this changes
3. The average person doesn't have the skill to do this. They are afraid to run even simpler things.
kart23|17 days ago
3. This is like saying the average person doesn't have the skill to run GTA over Wine on their Linux box. Gaming consoles exist.
throwaway5465|17 days ago
Local sovereignty isn't a pressing need for most users.
freakynit|17 days ago
The moats these companies might end up having in near future:
1. Government and enterprise contracts;
2. Even better private models not released to public and only accessible through long-term/exclusive contracts;
3. Gatekeeping the access to millions of their users, especially the non-technical ones, and charging premium for the same;
4. Becoming more and more the full-stack OSes to build on top of, by providing ready-made foundational layers like knowledge, memory, search/research, sandboxes, deployments, etc.;
5. Data/network effects from large-scale usage and feedback loops.
...