colinnordin | 1 year ago
>Now, you still want to train the best model you can by cleverly leveraging as much compute as you can and as many trillion tokens of high quality training data as possible, but that's just the beginning of the story in this new world; now, you could easily use incredibly huge amounts of compute just to do inference from these models at a very high level of confidence or when trying to solve extremely tough problems that require "genius level" reasoning to avoid all the potential pitfalls that would lead a regular LLM astray.
I think this is the most interesting part. We always knew a huge fraction of the compute would be on inference rather than training, but it feels like the newest developments are pushing this even further towards inference.
Combine that with the fact that you can run the full R1 (671B) distributed across 3 consumer computers [1].
If most of Nvidia's moat is in being able to efficiently interconnect thousands of GPUs, what happens when that is only important to a small fraction of the overall AI compute?
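The "huge amounts of compute just to do inference" idea is easy to picture as best-of-N sampling with majority voting (self-consistency). A minimal sketch, where `query_model` is a hypothetical stand-in for a real LLM call, simulated here as a noisy solver:

```python
import random
from collections import Counter

random.seed(0)  # deterministic demo

def query_model(prompt, temperature=0.8):
    # Hypothetical stand-in for a real LLM call; simulated here as a
    # solver that answers correctly ("42") 60% of the time.
    if random.random() < 0.6:
        return "42"
    return str(random.randint(0, 99))

def best_of_n(prompt, n=32):
    # Spend n inference calls on one question, then majority-vote.
    # More inference compute buys higher confidence in the answer.
    answers = [query_model(prompt) for _ in range(n)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n

answer, agreement = best_of_n("What is 6 * 7?", n=64)
print(answer, agreement)
```

R1-style long chains of thought spend that extra inference compute inside a single response rather than across samples, but the budget trade-off is the same: more FLOPs per answer, higher confidence.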
tomrod|1 year ago
Imagine having 300. Could you build even better models? Is DeepSeek the right team to deliver that, or can OpenAI, Meta, HF, etc. adapt?
Going to be an interesting few months on the market. I think OpenAI lost a LOT in the board fiasco. I am bullish on HF. I anticipate Meta will lose folks to brain drain in response to management equivocation around company values. I don't put much stock in Google's or Microsoft's AI capabilities; they are the new IBMs and are no longer innovating except at the obvious margins.
onlyrealcuzzo|1 year ago
It seems like there is MUCH to gain by migrating to this approach - and the cost of switching should theoretically be far less than the rewards to be reaped.
I expect all the major players are already working full-steam to incorporate this into their stacks as quickly as possible.
IMO, this seems incredibly bad for Nvidia, and incredibly good for everyone else.
I don't think this seems particularly bad for ChatGPT. They've built a strong brand. This should just help them reduce - by far - one of their largest expenses.
They'll be at a slight disadvantage versus, say, Google - who can much more easily switch from GPU to TPU. ChatGPT could have some growing pains there; Google would not.
danaris|1 year ago
I don't pretend to know much about the minutiae of LLM training, but it wouldn't surprise me at all if throwing massively more GPUs at this particular training paradigm only produces marginal increases in output quality.
simpaticoder|1 year ago
Would it not be useful to have multiple independent AIs observing and interacting to build a model of the world? I'm thinking something roughly like the "advisors" in the Civilization games, giving defense/economic/cultural advice, but generalized over any goal-oriented scenario (and including one to take the "user" role). A group of AIs with specific roles interacting with each other seems like a good area to explore, especially now given the downward scalability of LLMs.
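A minimal sketch of such a council, assuming role-prompted calls to one underlying model; `ask` is a hypothetical stand-in that a real system would replace with an LLM call:

```python
# Toy "council" of role-prompted agents, in the spirit of Civilization's
# advisors. `ask` is a hypothetical stand-in for a real LLM call.
def ask(role, question):
    return f"[{role}] view on: {question}"

ROLES = ["defense advisor", "economic advisor", "cultural advisor", "user"]

def council(question):
    # Each advisor answers independently from its own perspective...
    opinions = {role: ask(role, question) for role in ROLES}
    # ...then a chair agent reads all opinions and produces one decision.
    briefing = "\n".join(opinions.values())
    return ask("chair", f"Decide, given:\n{briefing}")

decision = council("Should we expand the settlement east?")
print(decision)
```

With small local models, each role could even be a separately fine-tuned model rather than a prompt over one shared model.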
tw1984|1 year ago
nah. Its moat is CUDA and the millions of devs using CUDA, aka the ecosystem.
ReptileMan|1 year ago
So far it seems that the best investment is in RAM producers. Unlike compute, the RAM requirements seem to be stubborn.
qingcharles|1 year ago
The higher-performing chips, with one less interconnect, are going to give you significantly higher t/s.
qingcharles|1 year ago
I wonder how badly this quant affects the output on DeepSeek?
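For intuition on why quantization costs quality: a toy symmetric round-to-nearest scheme shows the worst-case rounding error growing as the bit width shrinks. (The quants actually used to run big models on consumer hardware are more sophisticated, e.g. per-group scales, but the trade-off is the same.)

```python
# Toy symmetric round-to-nearest quantization of a weight vector:
# fewer bits -> coarser grid -> larger worst-case rounding error.
def quantize(weights, bits):
    levels = 2 ** (bits - 1) - 1                   # e.g. 7 for 4-bit
    scale = max(abs(w) for w in weights) / levels  # one scale per vector
    return [round(w / scale) * scale for w in weights]

weights = [0.11, -0.42, 0.93, -0.27, 0.05]
for bits in (8, 4, 2):
    q = quantize(weights, bits)
    err = max(abs(a - b) for a, b in zip(weights, q))
    print(f"{bits}-bit max error: {err:.4f}")
```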
neuronic|1 year ago
Offtopic, but your comment finally pushed me over the edge to semantic satiation [1] regarding the word "moat". It is incredible how this word turned up a short while ago and now it seems to be a key ingredient of every second comment.
[1] https://en.wikipedia.org/wiki/Semantic_satiation
mikestew|1 year ago
I’m sure if I looked, I could find quotes from Warren Buffett (the recognized originator of the term) going back a few decades. But your point stands.