top | item 46851465

spiderice | 27 days ago

> what if the current prices really are unsustainable and the thing goes 10x?

Where does this idea come from? We know how much it costs to run LLMs. It's not like we're waiting to find out. AI companies aren't losing money on API tokens. What could possibly happen to make prices go 10x when they're already running at a profit? Claude Max might be a different story, but AI is going to get cheaper to run. Not randomly 10x for the same models.


overgard|27 days ago

From what I've read, every major AI player is losing a lot of money running LLMs, even just on inference. It's hard to say for sure because they don't publish their financials (or when they do, they tend to be obfuscated), but if the screws start being turned on investment dollars, they not only have to raise prices on their current offerings (a 2x increase wouldn't shock me), but some of them also need a massive influx of capital to handle things like data-center build obligations (tens of billions of dollars). So I don't think it's crazy to think prices might go up quite a bit. We've already seen waves of it, like last summer when Cursor suddenly became a lot more expensive (or less functional, depending on your perspective).

sothatsit|27 days ago

Dario Amodei has said that their models actually have a good return, even after accounting for training costs [0]. They lose money because of R&D, training the next, bigger models, and (I assume) also investment in other areas like data centers.

Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply. All of this makes me believe them when they say they are profitable on API usage. Usage on the subscription plans is more of an unknown.

[0] https://youtu.be/GcqQ1ebBqkc?si=Vs2R4taIhj3uwIyj&t=1088

hyperadvanced|27 days ago

This is my understanding as well. If GPT made money, wouldn't the companies that run it be publicly traded?

Furthermore, the companies which are publicly traded show that, overall, the products are not economical. Meta and MSFT are great examples of this, though investors have recently appraised their results in opposite ways. Notably, OpenAI and MSFT are more closely linked than any other Mag7 company is with an AI startup.

https://www.forbes.com/sites/phoebeliu/2025/11/10/openai-spe...

raincole|27 days ago

> From what I've read, every major AI player is losing a (lot) of money on running LLMs, even just with inference.

> It's hard to say for sure because they don't publish the financials (or if they do, it tends to be obfuscated)

Yeah, exactly. So how the hell do the bloggers you read know the AI players are losing money? Are they whistleblowers, or are they pulling numbers out of their asses? Your choice.

lemming|27 days ago

Sam Altman is on record saying that OpenAI is profitable on inference. He might be lying, but it seems an unlikely thing to lie about.

up-n-atom|27 days ago

Where did you get this notion from? You must not be old enough to know how subscription services play out. Ask your parents about their internet or mobile bills. Or at the very least, check Azure's, AWS's, and Netflix's historical pricing.

Heck, we were spoiled by “memory is cheap”, but here we are today, wasting it at every opportunity as prices keep skyrocketing (PS: they ain’t coming back down). If you can’t see the shift to forced subscriptions via technologies disguised as “security” (i.e. secure boot) and monopolistic distribution (Apple, Google, Amazon, or the OEMs), you’re running with blinders on. Computing’s future, as it’s heading, is closed, subscription-serviced, mobile-only ecosystems. They’ll nickel-and-dime users for every nuanced freedom of expression they can.

Is it crazy to correlate the price of memory with our ability to run LLMs locally?

raincole|27 days ago

> Ask your parents about their internet or mobile bills. Or at the very least, check Azure's, AWS's, and Netflix's historical pricing.

None of these went 10x. Actually, for me the internet went to 0.0001~0.001x in terms of price per bit. I lived through the dial-up era.
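Back-of-the-envelope check on that ratio (the dollar figures here are rough assumptions for illustration, not sourced prices):

```python
# Compare cost per unit of bandwidth: late-90s dial-up vs. a modern
# gigabit plan. Prices are assumed ballpark figures, not sourced data.
dialup_price = 20.0       # USD/month for dial-up (assumption)
dialup_speed = 56e3       # 56 kbit/s modem

broadband_price = 70.0    # USD/month for gigabit fiber (assumption)
broadband_speed = 1e9     # 1 Gbit/s

# USD per (bit/second) of bandwidth for each era
dialup_cost_per_bit = dialup_price / dialup_speed
broadband_cost_per_bit = broadband_price / broadband_speed

# How much the price per bit changed
ratio = broadband_cost_per_bit / dialup_cost_per_bit
print(f"price per bit went to {ratio:.4f}x")
```

With these assumed numbers the ratio comes out around 0.0002x, i.e. inside the 0.0001~0.001x range the comment claims.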