top | item 40082039


LrnByTeach | 1 year ago

Losers & winners from Llama-3-400B matching 'Claude 3 Opus', etc.

Losers:

- Nvidia stock: a lid on GPU growth in the coming year or two, as nation states use Llama-3/Llama-4 instead of spending $$$ on GPUs to train their own models; the same goes for big corporations.

- OpenAI & Sam: harder to raise the speculated $100 billion, given the pace of GPT-4/GPT-5 advances is visible now.

- Google: a diminished AI-superiority posture.

Winners:

- AMD, Intel: these companies can focus on chips for AI inference instead of falling behind Nvidia's superior training GPUs.

- Universities & the rest of the world: can build on top of Llama-3.


vineyardmike|1 year ago

I also disagree on Google...

Google's business is largely not predicated on AI the way everyone else is. Sure they hope it's a driver of growth, but if the entire LLM industry disappeared, they'd be fine. Google doesn't need AI "Superiority", they need "good enough" to prevent the masses from product switching.

If the entire world is saturated in AI, then it no longer becomes a differentiator to drive switching. And maybe the arms race will die down, and they can save on costs trying to out-gun everyone else.

cm2012|1 year ago

AI is slowly taking market share from search. More and more people will turn to an AI to find things rather than a search bar. It will be a crisis for Google in 5-10 years.

season2episode3|1 year ago

Google’s play is not really in AI imo; it’s in the fact that their custom silicon allows them to run models cheaply.

Models are pretty much fungible at this point if you’re not trying to do any LoRAs or fine tunes.

gliched_robot|1 year ago

Disagree on Nvidia, most folks fine-tune models. Proof: there are about 20k models on Hugging Face derived from Llama 2, all of them trained on Nvidia GPUs.

eggdaft|1 year ago

Fine-tuning can take a fraction of the resources required for training, so I think the original point stands.
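To give a rough sense of why: adapter-based fine-tuning such as LoRA only trains small low-rank matrices alongside the frozen base model. The numbers below are illustrative hyperparameters (a hypothetical 70B-class model with rank-16 adapters on the attention projections), not Meta's actual setup:

```python
# Illustrative: fraction of parameters a LoRA adapter actually trains.
# All dimensions are assumed/hypothetical, chosen to resemble a 70B model.
hidden = 8192          # model hidden size
layers = 80            # transformer layers
rank = 16              # LoRA rank
adapted_matrices = 4   # q, k, v, o projections per layer

total_params = 70e9
# Each adapted weight gets two low-rank factors, A (hidden x rank)
# and B (rank x hidden), hence the factor of 2.
lora_params = layers * adapted_matrices * 2 * hidden * rank

print(f"trainable fraction: {lora_params / total_params:.5%}")
```

On these assumptions, only about 0.1% of the parameters are trainable, which is why fine-tuning jobs fit on a handful of GPUs while pretraining needs thousands.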

drcode|1 year ago

The memory chip companies were done for, once Bill Gates figured out no one would ever need more than 640K of memory

adventured|1 year ago

Misattributed to Bill Gates; he never said it.

phkahler|1 year ago

Right. We all need 192 or 256GB to locally run these ~70B models, and 1TB to run a 400B.
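The arithmetic behind those figures can be sketched as below. This is a rough estimate assuming 16-bit weights plus ~20% overhead for the KV cache and activations; real requirements vary with quantization and context length:

```python
def model_memory_gb(params_billions, bytes_per_param=2, overhead=1.2):
    """Rough memory needed to run an LLM locally.

    bytes_per_param=2 assumes fp16/bf16 weights; overhead=1.2 is an
    assumed ~20% margin for KV cache and activations.
    """
    return params_billions * 1e9 * bytes_per_param * overhead / 1e9

print(round(model_memory_gb(70)))   # ~168 GB for a 70B model at fp16
print(round(model_memory_gb(400)))  # ~960 GB for a 400B model at fp16
```

With 4-bit quantization (`bytes_per_param=0.5`) the 70B figure drops to roughly 42 GB, which is why quantized 70B models fit on high-end consumer hardware.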

Rastonbury|1 year ago

If anything, a capable open-source model is good for Nvidia; not commenting on their share price, but on the business, of course.

Better open models lower the barrier to building products and drive prices down; more options at cheaper prices mean bigger demand for GPUs and cloud. More of what end customers pay goes to inference rather than to the IP/training of proprietary models.

edward28|1 year ago

Pretty sure meta still uses NVIDIA for training.

whywhywhywhy|1 year ago

>AMD, intel: these companies can focus on Chips for AI Inference

No real evidence either can pull that off in any meaningful timeline; look how badly they neglected this type of computing over the past 15 years.

oelang|1 year ago

AMD is already competitive on inference