top | item 46079018

Deegy|3 months ago

They know that LLMs as a product are racing towards commoditization. Bye bye profit margins. The only way to win is regulation allowing a few approved providers.

fuzzy_biscuit|3 months ago

They are more likely trying to race towards wildly overinflated government contracts because they aren't going to profit how they're currently operating without some of that funny money.

baxtr|3 months ago

Isn’t that a bit like saying storage is a commodity and thus profit margins will be/should be low?

All major cloud providers have high profit margins in the range of 30-40%.

adam_arthur|3 months ago

Storage doesn't require the same capex/upfront investment to get that margin.

How much does it cost to train a cutting-edge LLM? Those costs need to be factored into the margin from inference.

Buying hard drives and slotting them in also has capex associated with it, but far less in total, I'd guess.

az09mugen|2 months ago

RAM also apparently.

kupopuffs|3 months ago

this is slightly more nuanced, since the AI portion is not making money. it's their side hustle

delusional|3 months ago

What profit margins?

Deegy|3 months ago

It's unclear. Every day I seem to read contradictory headlines about whether or not inference is profitable.

If inference has significant profitability and you're the only game in town, you could do really well.

But without regulation, as a commodity, the margin on inference approaches zero.

None of this even speaks to recouping the R&D costs it takes to stay competitive. If they're not able to pull up the ladder, these frontier model companies could have a really bad time.

kibwen|3 months ago

It's still technically a profit margin if it's less than zero...

bko|3 months ago

There are profit margins on inference from what I understand. However the hefty training costs obviously make it a money losing operation.

SoftTalker|3 months ago

The ones they hoped for.

vpShane|3 months ago

Yeah, but we can self-host them. At this point it's more about the infrastructure and compute needed to meet demand, and Google won because it has many business models, massive cash flow, TPUs, and existing infrastructure to expand on. It would take a new company ~25 years to plan out compute, build data centers, and stand up a viable, tangible infrastructure, all while trying to figure out profits.

I'm not sure how the regulation of things would work, but prompt injections and whatever other attacks we haven't seen yet, where agents can be hijacked and made to do things, sound pretty scary.

It's a race towards AGI at this point. Not sure if that can be achieved as language != consciousness IMO

Arainach|3 months ago

>Yeah, but we can self-host them

Who is "we", and what are the actual capabilities of the self-hosted models? Do they do the things that people want/are willing to pay money for? Can they integrate with my documents in O365/Google Drive or my calendar/email in hosted platforms? Can most users without a CS degree and a decade of Linux experience actually get them installed or interact with them? Do they integrate with the tools people already use?

Statistically close to "everyone" cannot run great models locally. GPUs are expensive and niche, especially with large amounts of VRAM.

wyre|3 months ago

>It's a race towards AGI at this point. Not sure if that can be achieved as language != consciousness IMO

However, it's arguable that thought is related to consciousness. I’m aware non-linguistic thought exists and is vital to any definition of consciousness, but LLMs technically don’t think in words, they think in tokens, so I could imagine this getting closer.
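The words-vs-tokens point can be illustrated with a toy greedy longest-match subword tokenizer. The vocabulary below is invented purely for illustration; real models use learned BPE/WordPiece vocabularies with tens of thousands of entries, but the effect is the same: a single word like "consciousness" becomes several subword tokens.

```python
# Toy greedy longest-match subword tokenizer with a hypothetical vocab,
# showing that models operate on subword tokens rather than whole words.
VOCAB = {"con", "scious", "ness", "think", "ing", "token", "s", "in", " "}

def tokenize(text, vocab=VOCAB):
    """Greedily match the longest vocab entry at each position."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character: fall back to a char
            i += 1
    return tokens

print(tokenize("thinking in tokens"))
# → ['think', 'ing', ' ', 'in', ' ', 'token', 's']
print(tokenize("consciousness"))
# → ['con', 'scious', 'ness']
```

So the model's "units of thought" don't line up with word boundaries at all, which is part of why the language-vs-thought distinction is murkier than it first appears.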

threethirtytwo|3 months ago

The bottleneck for commoditization is hardware. Manufacturing of the required hardware is led by TSMC, with Samsung a close second. The tooling required for manufacture is centralized with ASML and a few smaller players like Zeiss, and product design centers on Nvidia, though players like AMD are attempting to catch up.

It is a complex supply chain but each section of the chain is held by only a few companies. Hopefully this is enough competition to accelerate the development of computational technologies that can run and train these LLMs at home. I give it a decade or more.

nradov|3 months ago

Another way to win is through exclusive access to high quality training data. Training data quality and quantity represent an upper bound on LLM performance. That's why the frontier model developers are investing some of their "war chests" in purchasing exclusive rights to data locked up behind corporate firewalls, and even hiring human subject matter experts in order to create custom proprietary training data in certain strategic domains.

b0Ring|3 months ago

[deleted]

missedthecue|3 months ago

The "few approved providers" model is what they have been fighting against since the Biden admin.

flir|3 months ago

The only way to win is to commoditize your complement (IMO).

pclmulqdq|3 months ago

That's a good line but it only works if market forces don't commoditize you first. Blithely saying "commoditize your complement" is a bit like saying "draw the rest of the owl."