
LLMs feel more like CPUs than applications

1 point | derverstand | 7 days ago

I’ve been thinking about the current LLM wave and the historical microchip transition.

Microchips were never “products” in themselves. They were compute primitives. The real value emerged in operating systems, developer tools, and applications built on top.

Today, LLMs increasingly feel similar. OpenAI, Anthropic, Google, etc. are building cognitive compute layers. Most “AI startups” look like early PC software – wrappers around a new primitive.

Agents then feel like early operating systems: orchestration layers around probabilistic compute, adding memory, tool access, execution loops.

If this analogy holds, the long-term value might not sit in the base models themselves, but in:

- workflow integration
- vertical domain systems
- data pipelines
- distribution
- orchestration layers

The open question: Is this closer to the Intel/AMD era (infrastructure shift), or something fundamentally different because the primitive is stochastic and language-native?

Curious how others see the analogy.

2 comments


0xecro1 | 7 days ago

It's fundamentally different, and that's exactly why the CPU analogy has an expiration date. Intel never built Windows. The chip layer and the application layer were structurally separated: different companies, different business models, different competencies. That separation is what created the massive ecosystem on top.

Frontier labs aren't staying in the chip layer. Anthropic ships the model, the API, artifacts, computer use, MCP — primitive through orchestration through distribution. OpenAI has the model, ChatGPT, plugins, the app store play. They're vertically integrating the entire stack at a speed Intel never could, precisely because the primitive is probabilistic and language-native. When your compute layer already "understands" the task, the gap between infrastructure and application collapses.

So short-term, yes, it looks like Intel/AMD. Base models commoditize, value flows to tooling and verticals. But long-term, the labs that own the primitive will likely own most of the stack above it too. The "just build on top" window might be shorter than people expect.

derverstand | 7 days ago

Fair point on the vertical integration. That’s definitely a real difference from the CPU era.

At the same time, I’m not fully convinced that owning the primitive automatically means owning most of the value on top of it in the long run.

Nvidia owns the GPU layer, but it doesn’t own the majority of the software built on top of GPUs. AWS owns infrastructure, but SaaS value still fragments heavily across vertical domains. Infrastructure providers often try to move up the stack, but specialization and domain depth tend to create space above them.

It’s possible that frontier labs will capture more horizontal value than chip companies ever did. But I’m not sure language-native compute completely collapses the stack. It might just reduce friction and lower the barrier for building vertical systems.

The interesting question to me is whether this really eliminates the ecosystem layer — or just reshapes it.