
Why OpenAI's $157B valuation misreads AI's future (Oct 2024)

139 points | pcurve | 1 year ago | foundationcapital.com | reply

132 comments

[+] openrisk|1 year ago|reply
"Linux ultimately prevailed, not because it was better from the start, but because it allowed developers to modify the code freely, run it more securely and affordably, and build a broader ecosystem that enabled more capabilities than any closed system"

DeepSeek followed Llama and will be followed by others in the usual mushrooming fashion of open source. People really don't appreciate the magnitude of the disruptive force that the open source paradigm unleashes. A year from now the landscape will be brimming with new initiatives. In a few years nobody will even remember "open"AI.

Conventional economic theory will always misread the future of computing (and thus "AI"). Zero marginal cost and infinite replicability are not a bug, they're a feature. But so far we don't really have a good model for how to think about that and merge it with mainstream business models. Something must pay the bills eventually, but these are very different bills from those of conventional scarcity-based businesses. Ironically, in the end the main scarcity is human ingenuity. Read the interview with the DeepSeek founder on why their models are open source.

[+] hx8|1 year ago|reply
With current generation AI we really need AI + humans to get good results. It seems likely that the entire LLM branch of products will have this limitation. If that's the case, let's race to make the best open source AI as fast as possible, so it can be spread as widely as possible and be used to fix our shared problems. It's the wide spread that will lead to breakout results in fields such as cancer research just as much as having the most intelligent system, because humans bring some X factor of creativity/ingenuity/novelty.
[+] mbowcut2|1 year ago|reply
DeepSeek has demonstrated that there is no technical moat. Model training costs are plummeting, and the margins for APIs will just get slimmer. Plus model capabilities are plateauing. Once model improvement slows down enough, seems to me like the battle is to be fought in the application layer. Whoever can make the killer app will capture the market.
[+] mirzap|1 year ago|reply
Model capabilities are not plateauing; in fact, they are improving exponentially. I believe people struggle to grasp how AI works and how it differs from other technologies we invented. Our brains tend to think linearly; that's why we see AI as an "app." With AI (ASI), everything accelerates. There will be no concept of an "app" in ASI world.
[+] keithwhor|1 year ago|reply
Something worth noting is that ChatGPT currently is the killer app -- DeepSeek's current chart-topping app notwithstanding (not clear if viral blip or long-term trend).
[+] abathur|1 year ago|reply
Can you unpack why you think there'll be defensible moats at the application layer?

(I thought you had this exactly right when I read it, but I kept noodling on it while I brushed my teeth, and now I'm not so sure LLMs won't just prove hard to build durable margins on at meaningful volume?)

[+] nextworddev|1 year ago|reply
Just playing devil's advocate:

VCs (esp. those who missed out on OAI) are heavily incentivized to root for OAI to fail and to commoditize the biggest COGS item (AI models).

This guy is just talking his book.

[+] swyx|1 year ago|reply
wise man once said: "the devil doesn't need advocates, hell is full of them".

you can both talk your book and also sincerely believe what you say. ad hominem (or whatever the latin equivalent is of ad bookinem) is not as substantive a criticism as you make it out to be. he can be both biased and correct.

[+] krainboltgreene|1 year ago|reply
That doesn't make any sense. A lack of investment is not a bet against it by any means, unless this VC invested in the concept of less spam or more workers.
[+] dralley|1 year ago|reply
VCs with deep pocketbooks, their startups, and the hardware vendors they purchased from (not to mention politicians) are heavily incentivised to believe that their value-add can't be commoditized.

If your grand dream is to dominate the market through sheer massive scale and that's what you're selling to capital, you're not exactly looking for reasons to buy less hardware and your vendor is hardly going to talk you out of it.

"It's hard to get a man to understand something when his fat valuation depends on his not understanding it"

[+] Stokley|1 year ago|reply
Whether you love or hate OpenAI, the CapEx involved with this company will be viewed as historic in the future, and will change (has already changed) the paradigm of how tech startups/projects are funded
[+] manquer|1 year ago|reply
I think it will have an adverse effect on the funding ecosystem.

The inevitable haircut all the funds are going to take on OpenAI and other AI startups when revenue fails to materialize[1] will herald a bust cycle and a lot more circumspection about large investments, like what happened a few years back when a large number of SoftBank investments did not pan out. Notably, most of them relied on big funding rounds to muscle out other players; not all of them failed, but all of them lost enterprise value for investors.

---

[1] This is inevitable regardless of the success of the space, because the cost of inference keeps dropping, combined with competition from high-quality open-weight models, as DeepSeek, Stable Diffusion, and others have shown. That will put strong downward pressure on pricing, impacting both revenue and profit.

[+] piva00|1 year ago|reply
It has changed the paradigm negatively so far: it vacuumed up so much money that anything non-AI isn't getting much funding, it locked up CapEx, and now it looks like there won't be a massive RoI on the capital spent.

The AI hype may well have set back a lot of other products that never got the funding necessary to get off the ground; it's not looking pretty.

[+] elijahbenizzy|1 year ago|reply
We’ve just learned that it’s possible to do AI on less compute (DeepSeek). If OpenAI’s problem is that it doesn’t scale, then I’d argue that in the long run, if you believe in their ability to do research, the news this week is a very bullish sign.

IMO the equivalent of Moore's law for AI (in both software and hardware development) is baked into the price, which doesn’t make the valuation all too crazy.

[+] refulgentis|1 year ago|reply
> We’ve just learned that it’s possible to do AI on less compute (deepseek).

There's a huge motte-and-bailey thing in the DeepSeek conversation, where the bailey is "It only took $5.5 million!*" (* for exactly one training run for one of several models, at dirt-cheap per-hour spot prices for H100s) and the motte is all sorts of stuff.

The truth is that one run for one model took 2048 GPUs full-time for 2 months, and in my experience with FAANG ML, that means it took 6 months part-time and another 1.5-2.5 runs went absolutely nowhere.
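The headline number is easy to sanity-check. A minimal back-of-envelope sketch: the GPU count and duration are from the comment above, while the per-GPU-hour spot price is an assumption.

```python
# Back-of-envelope check on DeepSeek's headline training cost.
# GPU count and duration are from the thread; the spot price is an assumption.
gpus = 2048                # full-time for the single training run
hours = 2 * 30 * 24        # roughly two months, in hours
price_per_gpu_hour = 2.00  # assumed dirt-cheap spot rate, USD

gpu_hours = gpus * hours
cost_usd = gpu_hours * price_per_gpu_hour
print(f"{gpu_hours:,} GPU-hours -> ${cost_usd / 1e6:.1f}M")
```

Which lands right around the famous ~$5.5M figure, for one successful run only; exploratory and failed runs multiply it.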

[+] _heimdall|1 year ago|reply
> is baked into the price, which doesn’t make the valuation all too crazy.

Valuations for most large companies have been crazy for a while now. No one values a company based on fundamentals anymore; it's all pure gambling on future predictions.

This isn't unique to OpenAI by any means, but they are a good example. Last I checked, their valuation-to-revenue multiple was in the range of 42x. That's crazy.

[+] Animats|1 year ago|reply
Does anyone know how Deepseek does it yet?
[+] SlightlyLeftPad|1 year ago|reply
Honestly, I’m not sure I’m completely sold on the value of LLMs long term but this is the most realistic and reasonable take I’ve read on this post so far.

If anything, it’s a downward adjustment in the cost implications, but that could actually unlock exponential improvements on a shorter time horizon than expected. Investors getting scared is probably a good opportunity to buy in.

[+] FeepingCreature|1 year ago|reply
It's always been possible to "do (worse) AI on less compute". We've had years of open models! I also don't understand how anyone can see this as anything but good news for OpenAI. The ultimate value proposition of AI has always depended on whether it stretches to AGI and beyond, and R1 demonstrates that there's several orders of magnitude of hardware overhang. This makes it easier for OpenAI to succeed, not harder, because it makes it less likely that they'll scale to their financial limits and still fail to surpass humans.
[+] trhway|1 year ago|reply
> But while Facebook’s costs decreased as it scaled, OpenAI’s costs are growing in lockstep with its revenue, and sometimes faster

And here comes DeepSeek and takes the steam out of this and the cost arguments that follow it.

[+] dralley|1 year ago|reply
It's lose-lose for their valuation regardless. This scenario might, if anything, be worse for them. Now they have massive sunk capital investments that the second mover might be able to avoid. If the open source models get small enough and high enough quality, the rationale for running them in the cloud in the first place starts to evaporate.

How does OpenAI get paid for a use case that can easily be run locally on an iPhone?

[+] PKop|1 year ago|reply
Isn't the argument that, after all of OpenAI's spending, an upstart derived a cheaper but similarly capable alternative from their investment? Where's the profitability in that equation coming from?

Now there's another supplier to match the (potential?) consumer or corporate demand that's diffused among more competitors, and open source.

[+] croes|1 year ago|reply
So far we only know what they claim the costs are.
[+] blackeyeblitzar|1 year ago|reply
People are announcing the death of foundational models too early. Don’t people realize that the big AI players will take all of the proprietary things they’ve been building up behind closed doors and simply layer onto them all the winning techniques everyone else is publishing (like what DeepSeek has used)? DeepSeek itself is taking ideas that have proven out in various other papers and stacking them up to produce their gains (which they’ve been transparent about in their papers).

I also still don’t believe their cost figures, and think they’re leaving out the capital to acquire their secret GPU stash and the cost of pre-training their base model (DeepSeek-V3-base). I also suspect their training corpus, which they’ve only vaguely described, would reveal that the savings came from working off other foundational models’ work without counting those costs in their figure.

For now, I treat the cost claim as simply a calculated strategy for China to not look like they’re behind in the most important race, to prevent investors from continuing to boost US technology by causing them to doubt the ROI, and to take value out of the US stock market as they did today.

[+] tempeler|1 year ago|reply
Pricing a valuation is betting, or wishing. The buyer thinks it will increase; the seller thinks it's enough. No one knows what will happen in the future. Maybe the Fed will print too much money.
[+] stego-tech|1 year ago|reply
A pretty good read that succinctly picks apart the realities of current AI businesses. Easily something I’d reference as a “primer” for someone who is more business-minded than technically-minded.

One point I’ll agree on is his final one: that the true big players haven’t even been founded yet. Right now, the AI hype seems to still revolve around the dream of replacing humans with machines and still magically making Capitalism work in the process, which is something I (and other “contrarians”) have beaten to death in other threads. That said, what these companies have managed to demonstrate is that transformer-based predictive models are a part of the future - just not AGI.

If I were a VC, I’d be looking at startups that take the same training techniques but apply them in niche fields with higher success rates than general models. An example might be a firm that puts in the grunt work of training a foundational model in a specific realm of medicine, and then makes it easier for a hospital network to run said model locally against patient data while also continuously training and fine-tuning the underlying model. I wouldn’t want to get into the muck of SaaS in these cases, because data sovereignty is only going to become an ever-thornier issue in the coming decades, and these prediction models can leak user data like a sieve if not implemented correctly. Same goes for other narrow applications, like single-mode logistics networks or on-site hospitality interfaces. The real money will be in the ability to run foundational models against your own data in privacy and security, with inference at the edge or on-device rather than off in a hyperscaler datacenter somewhere.

Then again, I could be totally wrong. Guess we’ll all find out together.

[+] RugnirViking|1 year ago|reply
I believe one of the real insights of the widespread adoption of LLMs across problem domains is that the general knowledge insight of such models actually maps to increased performance on specific domain tasks. Hence finetuning is a better approach than training from scratch, unless you have insane compute (at which point, why restrict yourself to a narrow domain?)
[+] broken_clock|1 year ago|reply
Aren't there already a ton of startups doing finetunes for their local niche? Many aren't even "AI" companies; it's pretty easy to slap a finetune together if you have enough data.

If you mean developing a model from scratch just for your niche: the bitter lesson is that scale is everything and that a finetune of an internet-scale model will easily outperform you.
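The intuition in the two comments above, that warm-starting from a broadly pretrained model beats training from scratch on scarce in-domain data, can be illustrated with a deliberately tiny toy: a linear model trained by gradient descent. Everything here is synthetic and illustrative, not a claim about any real LLM.

```python
import numpy as np

# Toy "finetune vs. from-scratch" comparison on a linear model.
# "Pretrained" weights start close to the narrow-domain target,
# mimicking general knowledge transferring to a specific task.
rng = np.random.default_rng(0)
d = 20
w_true = rng.normal(size=d)                       # narrow-domain target model
w_pretrained = w_true + 0.1 * rng.normal(size=d)  # already close to it

X = rng.normal(size=(50, d))  # only 50 in-domain examples
y = X @ w_true

def train(w_init, steps=50, lr=0.05):
    """Plain gradient descent on mean squared error; returns final loss."""
    w = w_init.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(X)
    return float(np.mean((X @ w - y) ** 2))

loss_scratch = train(np.zeros(d))
loss_finetune = train(w_pretrained)
print(f"from scratch: {loss_scratch:.4f}  finetuned: {loss_finetune:.4f}")
```

With the same optimizer budget, the warm start wins simply because it begins far closer to the target; that's the cheap version of the bitter-lesson argument for finetuning internet-scale models rather than training narrow ones from zero.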

[+] dralley|1 year ago|reply
A year ago Sam Altman was going around trying to convince people we all needed to drop 7 trillion dollars to build hundreds of fabs and nuclear power plants to fuel his AI ambitions. Only a week ago he was triumphantly announcing 500 billion dollar deals with our new President.

The (regrettably temporary) ousting of Sam Altman looks like the right call, in hindsight. Of course some amount of showmanship is expected, but the extreme nature of this self-serving BS is just laughable.

6 months from now we may be looking at Sam Altman the way we look at Adam Neumann.

[+] imtringued|1 year ago|reply
The bitter lesson was never about hardware scaling being the only ingredient. It was about favouring generalist approaches that can scale with your hardware budget over carefully handcrafted bespoke solutions that conserve the "cheap" resource.

I'm not seeing anyone at OpenAI abandon the static-weights model, and yet they have the audacity to claim that they just need to scale more?

[+] deegles|1 year ago|reply
Or Elizabeth Holmes....
[+] romanovcode|1 year ago|reply
The timing with the 500b deal is just perfect
[+] a13n|1 year ago|reply
> I’d argue that the most valuable companies of the AI era don’t exist yet. They’ll be the startups that harness AI’s potential to solve specific, costly problems across our economy—from engineering and finance to healthcare, logistics, legal, marketing, sales, and more.

I feel like the author's concluding point contradicts itself. There is a gold rush, and OpenAI is selling shovels.

[+] alephnerd|1 year ago|reply
Glad to finally see Ashu Garg's writings on HN.
[+] halfcat|1 year ago|reply
It’s the year 2000. We have the internet, a technology that will change the world. Yahoo is the most valuable company on earth. Among the coolest things people do is go to CompUSA and pay money for a web browser, Netscape Navigator, because it supports the <blink> tag so you can make your geocities page even more awesome. Google is still operating out of a garage somewhere, and won’t be a household name until after the bubble bursts.

That’s where we are in the AI journey in 2025. The year 2000.

[+] tonyhart7|1 year ago|reply
and Microsoft literally spent $80 billion on top of it. Like, bro, imagine: an $80 billion company would be in the top 0.01 percent

and that valuation would crumble because of DeepSeek

[+] poorcedural|1 year ago|reply
Why do we not value QBASIC in the billions? Honestly, we value current Van Gogh paintings in the billions. The past cost us more; we got here because that art fought through decades of litigation. Does progress mean we forget all of that and hope on a promise of easy answers?
[+] danaris|1 year ago|reply
Scarcity.

Van Goghs have it; QBASIC doesn't. Anyone can download QBASIC for free.

[+] refulgentis|1 year ago|reply
For about a month now I've been paying $20-$30/day to delegate the bulk of my coding to Sonnet. The agentic loop that's trained into it is simply not matched by any other model.

I can't admit to myself there's any open question as to whether there is long-term value.

I expect within 2 years, this will seem like a non-controversial idea, and it won't bring in a ton of assumptions about the speaker.

I have invested much time and effort making sure local models are a peer to remote ones in my app, and none, including DeepSeek's local models, are remotely close to the things needed to make that flow work.

EDIT: Reply-throttled, so answering replies here:

- The machine is building the machine: Telosnex, a cross-platform Flutter app

- it can do 90% of the scope, especially after I wrote precanned instructions for doing e.g. property-based testing.

- Things it's done mostly wholesale: a secure iframe environment, on all 6 platforms, to execute JS in or render React components it wrote; and completely refactoring my llama.cpp inference to use non-deprecated APIs.

- Codebase is about 40K real lines of code. (I have to think this helps a lot; I doubt that, e.g., from scratch it would be able to build a Flutter app that used llama.cpp.)

- $30/day!?! -- Yeah, it's crazy; it's up an order of magnitude from my busiest days when I just copy-pasted back and forth. It reads as much code as it wants, and you're literally doing more work, so it adds up.

- $20/day is realistic average

- Lines added per day +55%, lines deleted per day +29%, files changed per day 9 -> 21 https://x.com/jpohhhh/status/1881453489852948561

[+] Jasondells|1 year ago|reply
The OpenAI vs. DeepSeek debate is fascinating... but I think people are oversimplifying both the challenges and the opportunities here.

First, OpenAI’s valuation is a bit wild—$157B on 13.5x forward revenue? That’s Meta/Facebook-level multiples at IPO, and OpenAI’s economics don’t scale the same way. Generative AI costs grow with usage, and compute isn’t getting cheaper fast enough to balance that out. Throw in the $6B+ infrastructure spend for 2025, and yeah, there’s a lot of financial risk. But that said... their growth is still insane. $300M monthly revenue by late 2023? That’s the kind of user adoption that others dream about, even if the profits aren’t there yet.

Now, the “no moat” argument... sure, DeepSeek showed us what’s possible on a budget, but let’s not pretend OpenAI is standing still. These open-source innovations (DeepSeek included) still build on years of foundational work by OpenAI, Google, and Meta. And while open models are narrowing the gap, it’s the ecosystem that wins long-term. Think Linux vs. proprietary Unix. OpenAI is like Microsoft here—if they play it right, they don’t need to have the best models; they need to be the default toolset for businesses and developers. (Also, let’s not forget how hard it is to maintain consistency and reliability at OpenAI’s scale—DeepSeek isn’t running 10M paying users yet.)

That said... I get the doubts. If your competitors can offer “good enough” models for free or dirt cheap, how do you justify charging $44/month (or whatever)? The killer app for AI might not even look like ChatGPT—Cursor, for example, has been far more useful for me at work. OpenAI needs to think beyond just being a platform or consumer product and figure out how to integrate AI into industry workflows in a way that really adds value. Otherwise, someone else will take that pie.

One thing OpenAI could do better? Focus on edge AI or lightweight models. DeepSeek already showed us that efficient, local models can challenge the hyperscaler approach. Why not explore something like “ChatGPT Lite” for mobile devices or edge environments? This could open new markets, especially in areas where high latency or data privacy is a concern.

Finally... the open-source thing. OpenAI’s “open” branding feels increasingly ironic, and it’s creating a trust gap. What if they flipped the script and started contributing more to the open-source ecosystem? It might look counterintuitive, but being seen as a collaborator could soften some of the backlash and even boost adoption indirectly.

OpenAI is still the frontrunner, but the path ahead isn’t clear-cut. They need to address their cost structure, competition from open models, and what comes after ChatGPT. If they don’t adapt quickly, they risk becoming Yahoo in a Google world. But if they pivot smartly—edge AI, better B2B integrations, maybe even some open-source goodwill—they still have the potential to lead this space.

[+] coldpepper|1 year ago|reply
AI is still a fad.
[+] blackeyeblitzar|1 year ago|reply
How so? There are so many tangible applications already: reduced customer service costs, legal research, analysis of medical records or imaging, self-driving Waymos, and so on. Just the things I listed will have profound impacts on cost savings, productivity, and quality of life.
[+] curvaturearth|1 year ago|reply
It certainly lets untrained people create stuff they probably should not
[+] baq|1 year ago|reply
Only if you’re moving the goal posts every other week.
[+] breakitmakeit|1 year ago|reply
I wrote a tested prototype MMO in the last few weekends with AI as my power tools.

You're holding it wrong.