top | item 44674048
WhatsName | 7 months ago

If it were any good I would assume there would be no need to hype it up.

My theory is that LLMs will get commoditized within the next year. The edge that OpenAI had over the competition is arguably lost. If the trend continues, we will be looking at inference priced like a commodity, where the most efficient providers, like Cerebras and Groq, will be the only ones actually making money in the end.
moojacob|7 months ago

They already are. I have been using Kimi K2. It is 90% as good as Sonnet, and on Groq it is 3x faster at 1/5th the price.

pizzalife|7 months ago

What kind of GPU setup are you using for Kimi?

csomar|7 months ago

That's an interesting name to choose. For a second there, I thought Grok enabled third-party models.

OldfieldFund|7 months ago

isn't it Q4 quantized on Groq?

dimitri-vs|7 months ago

I don't think so, look at how Sora changed every... Well, Operator was a game changer for... Hmm, but what about GPT-4.5 or PhD-level o3... o3-pro...? I mean, the $10k/month agents are definitely coming... any day now...

Anyway, I'm sure GPT-5 will be AGI.

empath75|7 months ago

> If it were any good I would assume there would be no need to hype it up.

Yes, this is why Apple famously just dumped the original iPhone on the market without telling anybody about it ahead of time.

SiempreViernes|7 months ago

With this comparison you are saying the original iPhone was like version 6 of a well-established product line, in a market that had seen major releases a few times a year for about three years.

That's certainly not how the first iPhone is usually described.

j_timberlake|7 months ago

"My theory is that LLMs will get commoditized within the next year."

Incredibly bad theory. It's like saying every LLM is the same because they can all talk, even though the newer ones continue to smash through benchmarks the older ones couldn't. And now that happens quarterly instead of yearly, so you can't even say it's slowing down.

infecto|7 months ago

At the moment most of the dollars are coming from consumer subscriptions, inclusive of business ones. That's where the valuations are getting pegged, and most API dollars are probably seen as experimental. Model quality matters, but product experience is what is driving revenue. In that sense OpenAI is doing quite well.

janalsncm|7 months ago

If that is the case, the $300 billion question is whether someone can create a product experience that is as good as OpenAI’s.

In my mind there are really three dimensions they can differentiate on: cost, speed, and quality. Cost is hard because they're already losing money. Speed is hard because differentiation would require better hardware (more capex).

For many tasks, perhaps even a majority right now, quality of free models is approaching good enough.

OpenAI could create models that are unambiguously more reliable than the competition, or ones that can answer questions no other model can. Neither of those has happened yet, afaik.

tootie|7 months ago

The fact that xAI exists only out of Elon Musk's personal spite, and yet produced a top-performing model, certainly implies that model training isn't any kind of moat. It's certainly very expensive, but it's not mysterious.

ml-anon|7 months ago

"Top-performing model"

I.e., overfit to benchmarks.

jryle70|7 months ago

Which year did Linux become the dominant desktop OS?