Less a technical comment and more just a mind-blown comment, but I still can’t get over just how much data is compressed into and available in these downloadable models. Yesterday I was on a plane with no WiFi, but had gemma3:12b downloaded through Ollama. Was playing around with it and showing my kids, and we fired history questions at it, questions about recent video games, and some animal fact questions. It wasn’t perfect, but holy cow the breadth of information that is embedded in an 8.1 GB file is incredible! Lossy, sure, but a pretty amazing way of compressing all of human knowledge into something incredibly contained.
It's extremely interesting how powerful a language model is at compression.
When you train it to be an assistant model, it's better at compressing assistant transcripts than it is general text.
There is an eval I have a lot of interest in and respect for, called UncheatableEval (https://huggingface.co/spaces/Jellyfish042/UncheatableEval), which tests how good a language model an LLM is by applying it to a range of compression tasks.
This task is essentially impossible to 'cheat'. Compression is a benchmark you cannot game!
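Roughly, what an eval like this measures is the bits-per-byte the model assigns to fresh text, which is also what an arithmetic coder driven by the model would need. A rough sketch of that measurement with the transformers library (gpt2 is just a stand-in model name here):

    # Sketch: score text by the bits an LM-driven arithmetic coder would need
    # (lower bits-per-byte = better language model).
    import math
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    name = "gpt2"  # placeholder; the eval runs many models over fresh data
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name).eval()

    def bits_per_byte(text: str) -> float:
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits
        # log-probability of each token given the tokens before it
        logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
        token_lp = logprobs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
        total_bits = -token_lp.sum().item() / math.log(2)
        return total_bits / len(text.encode("utf-8"))

    print(bits_per_byte("The quick brown fox jumps over the lazy dog."))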
Wikipedia is about 24GB, so if you're allowed to drop 1/3 of the details and make up the missing parts by splicing in random text, 8GB doesn't sound too bad.
To me the amazing part is that you can tell the model to do something and it follows simple instructions in plain English, like making a list or writing some python code to do $x.
A neat project you (and others) might want to check out: https://kiwix.org/
Lots of sources that you can download locally to have available offline. They're even providing some pre-loaded devices in areas that may not have reliable (or any) internet access.
> The English Wikipedia, as of June 26, 2025, contains over 7 million articles and 63 million pages. The text content alone is approximately 156 GB, according to Wikipedia's statistics page. When including all revisions, the total size of the database is roughly 26 terabytes (26,455 GB)
It is 64,800,000,000 bits.
I can imagine 100 bits, sure. And 1,000 bits, why not. At 10,000 you lose me. A million? That sounds like a lot. 64 million is a number I can't really imagine. And this is a thousand times 64 million!
The study of language models from an information theory/compression POV is a small field but increasingly important for efficiency/scaling - we did a discussion about this today: https://www.youtube.com/watch?v=SWIKyLSUBIc&t=2269s
The Encyclopædia Britannica has about 40,000,000 words [1], or about 0.25 GB if you assume 6 bytes per word. It's impressive but not outlandish that an 8.1 GB file could encode a large swath of human information.
[1]: https://en.wikipedia.org/wiki/Encyclopædia_Britannica
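Back-of-envelope, under the same rough assumptions (the 6 bytes per word is an approximation, not a measured figure):

    # Rough comparison: Britannica's text vs. an 8.1 GB model file
    words = 40_000_000
    bytes_per_word = 6              # assumption: ~5 letters plus a space
    britannica_gb = words * bytes_per_word / 1e9
    print(britannica_gb)            # ~0.24 GB of raw text
    print(8.1 / britannica_gb)      # the model file is roughly 34x larger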
they're an upgraded version of self-executable zip files: they compress knowledge like mp3 compresses music, without knowing exactly wtf either music or knowledge actually is
the self-execution is the interactive chat interface.
wikipedia gets "trained" (compiled+compressed+lossy) into an executable you can chat with, and you can pass this through another pretrained A.I. that can speak the text aloud or transcribe it.
I think writing compilers is now officially a defunct skill, kept for historical and conservation purposes more than anything else; but I don't like saying "conservation", it's a bad framing. I'd rather say "legacy connectivity", which is a form of continuity or backwards compatibility.
One factor is the huge redundancy pervasive in our communication.
(1) There are so many ways to say the same thing that (2) we have to add even more words to be precise at all. Without a verbal indexing system, we (3) spend many words just setting up context for what we really want to say. And finally, (4) we pervasively add a great deal of intentionally non-informational creative and novel variability, and mood-inducing color, which all require even more redundancy to maintain reliable interpretation, in order to induce our minds to maintain attention.
Our minds are active resistors of plain information!
All four factors add so much redundancy that it's probably fair to say most of our communication (by bits, characters, words, etc.) is 95%, 98%, or more pure redundancy.
Another helpful compressor is that many facts are among a few "reasonably expected" alternative answers, so it takes just a little biasing information to encode the right option.
Finally, the way we reason seems to be highly common across everything that matters to us, even though we have yet to identify and characterize this informal human logic. So once that is modeled, it must itself compress a lot of relations significantly.
Fuzzy Logic was a first-approximation attempt at modeling human "logic", but it has not been very successful.
Models should eventually help us uncover that "human logic" by analyzing how they model it. Doing so may let us create even more efficient architectures, perhaps significantly more efficient, and even provide more direct, non-gradient/data-based "thinking" designs.
Nevertheless, the level of compression is astounding!
We are far less complicated cognitive machines than we imagine! Scary, but inspiring too.
I personally believe that common PCs of today, maybe even high-end smartphones circa 2025, will be large enough to run future superintelligence when we get it right, given internet access to look up information.
We have just begun to compress artificial minds.
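One crude way to put a number on the redundancy point: even a dumb general-purpose compressor throws away most of the characters in ordinary English, and a language model that gets down to around 1 bit per character implies far more. A sketch (the input file name is made up, and the exact ratio depends heavily on the text):

    # Crude redundancy check: gzip typically shrinks plain English ~3-4x;
    # a strong LLM at ~1 bit/character implies ~8x or better.
    import gzip

    text = open("some_plain_english.txt", "rb").read()  # hypothetical sample file
    compressed = gzip.compress(text, compresslevel=9)
    print(len(compressed) / len(text))  # fraction of the original size remaining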
I don't like the term "compression" used with transformers because it gives the wrong idea about how they function: that they are a search tool glued onto a .zip file, your prompts are just fancy search queries, and hallucinations are just bugs in the recall algo.
Although strictly speaking they have lots of information in a small package, they are F-tier compression algorithms because the loss is bad, unpredictable, and undetectable (i.e. a human has to check it). You would almost never use a transformer in place of any other compression algorithm for typical data compression uses.
All digitized books ever written/encoded compress to a few TB. The public web is ~50TB. I think a usable zip of all english electronic text publicly available would be on O(100TB). So we're at about 1% of that in model size, and we're in a diminishing-returns area of training -- i.e., going to >1% has not yielded improvements (cf. gpt4.5 vs 4o).
This is why compute spend is moving to inference time with "reasoning" models. It's likely we're close to diminishing returns on inference-time compute now too, hence agents, whereby (mostly) deterministic tools supplement information/capability into the system.
I think to get any more value out of this model class, we'll be looking at domain-specific specialisation beyond instruction fine-tuning.
I'd guess targeting 1TB inference-time VRAM would be a reasonable medium-term target for high quality open source models -- that's within the reach of most SMEs today. That's about 250bn params.
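The 250bn figure falls out of assuming roughly 4 bytes per parameter (fp32 weights, or fp16 weights plus KV-cache/activation headroom); a quick check:

    # 1 TB of VRAM at ~4 bytes per parameter
    vram_bytes = 1e12
    bytes_per_param = 4   # fp32, or fp16 weights plus generous overhead
    print(vram_bytes / bytes_per_param / 1e9)  # ~250 billion parameters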
> All digitized books ever written/encoded compress to a few TB. The public web is ~50TB. I think a usable zip of all english electronic text publicly available would be on O(100TB).
Where are you getting these numbers from? Interested to see how that's calculated.
I read somewhere, but cannot find the source anymore, that all written text prior to this century was approx 50MB. (Might be misquoting it, as I don't have the source anymore.)
This is kind of related to the Jack Morris post https://blog.jxmo.io/p/there-are-no-new-ideas-in-ai-only where he discusses how the big leaps in LLMs have mostly come not so much from new training methods or architecture changes as such, but from the ability of new architectures to ingest more data.
This is somehow missing the Gemma and Gemini series of models from Google. I also think that not mentioning the T5 series of models is strange from a historical perspective because they sort of pioneered many of the concepts in transfer learning and kinda kicked off quite a bit of interest in this space.
The Gemma models are too small to be included in this list.
You're right that the T5 stuff is very important historically, but they're 11B and below and I don't have much to say about them. Definitely a very interesting and important set of models though.
It's ironic: for years the open-source community was trying to match GPT-3 (175B dense) with 30B-70B models + RLHF + synthetic data, and the performance gap persisted.
Turns out, size really did matter, at least at the base model level. Only with the release of truly massive dense (405B) or high-activation MoE models (DeepSeek V3, DBRX, etc) did we start seeing GPT-4-level reasoning emerge outside closed labs.
I think one thing this chart makes visually very clear is the point I made about GPT-3 being such a huge leap, and there being a long gap before anybody was able to match it.
That said, there's an unstated assumption here that these truly large language models are the most interesting thing. The big players have been somewhat quiet but my impression from the outside is that OpenAI let a little bit leak with their behavior. They built an even larger model and it turned out to be disappointing so they quietly discontinued it. The most powerful frontier reasoning models may actually be smaller than the largest publicly available models.
How big are those in terms of size on disk and VRAM size?
Something like 1.61B just doesn't mean much to me since I don't know much about the guts of LLMs. But I'm curious about how that translates to computer hardware -- what specs would I need to run these? What could I run now, what would require spending some money, and what I might hope to be able to run in a decade?
At 1 byte/param that's 1.6GB (f8), at 2 bytes (f16) that's ~3.2GB -- but there are other space costs beyond loading the parameters onto the GPU. So a rule of thumb is ~4x parameter count. So round up: 2B -> 2*4 = 8GB VRAM.
Most of these models have been trained using 16-bit weights. So a 1 billion parameter model takes up 2 gigabytes.
In practice, models can be quantized to smaller weights for inference. Usually, the performance loss going from 16 bit weights to 8 bit weights is very minor, so a 1 billion parameter model can take 1 gigabyte. Thinking about these models in terms of 8-bit quantized weights has the added benefit of making the math really easy. A 20B model needs 20G of memory. Simple.
Of course, models can be quantized down even further, at greater cost of inference quality. Depending on what you're doing, 5-bit weights or even lower might be perfectly acceptable. There's some indication that models that have been trained on lower bit weights might perform better than larger models that have been quantized down. For example, a model that was trained using 4-bit weights might perform better than a model that was trained at 16 bits, then quantized down to 4 bits.
When running models, a lot of the performance bottleneck is memory bandwidth. This is why LLM enthusiasts are looking for GPUs with the most possible VRAM. Your computer might have 128G of RAM, but your GPU's access to that memory is so constrained by bandwidth that you might as well run the model on your CPU. Running a model on the CPU can be done, it's just much slower because the computation is so parallel, and GPUs are much better suited to that kind of work.
Today's higher end consumer grade GPUs have up to 24G of dedicated VRAM (an Nvidia RTX 5090 has 32G of VRAM and they're like $2k). The dedicated VRAM on a GPU has a memory bandwidth of about 1 TB/s. Apple's M-series ARM-based CPUs have up to around 512 GB/s of memory bandwidth on the higher-end chips, and they're one of the most popular ways of being able to run larger LLMs on consumer hardware. AMD's new "Strix Halo" CPU+GPU chips have up to 128G of unified memory, with a memory bandwidth of about 256 GB/s.
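A rough way to see why bandwidth dominates: every generated token has to stream essentially all of the active weights through the compute units once, so tokens/sec is roughly bandwidth divided by the model's size in bytes (an upper bound; real systems fall short of it):

    # Upper-bound estimate: tokens/sec ~= memory bandwidth / active weight bytes
    def rough_tokens_per_sec(params_billion, bytes_per_param, bandwidth_gb_s):
        model_gb = params_billion * bytes_per_param
        return bandwidth_gb_s / model_gb

    print(rough_tokens_per_sec(8, 1, 1000))  # 8B model, 8-bit, ~1 TB/s GPU  -> ~125 tok/s
    print(rough_tokens_per_sec(70, 1, 256))  # 70B model, 8-bit, ~256 GB/s   -> ~3.7 tok/s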
Reddit's r/LocalLLaMA is a reasonable place to look to see what people are doing with consumer grade hardware. Of course, some of what they're doing is bonkers so don't take everything you see there as a guide.
And as far as a decade from now, who knows. Currently, the top silicon fabs of TSMC, Samsung, and Intel are all working flat-out to meet the GPU demand from hyperscalers rolling out capacity (Microsoft Azure, AWS, Google, etc). Silicon chip manufacturing has traditionally followed a boom/bust cycle. But with geopolitical tensions, global trade barriers, AI-driven advances, and whatever other black swan events, what the next few years will look like is anyone's guess.
As a rule of thumb, FP16 weights take 2 bytes per parameter, or about 2GB of VRAM per billion parameters, so a 7B model needs ~14GB, 70B needs ~140GB, and the 405B models need ~810GB for the weights alone (plus extra for KV cache and activations) - though quantization can reduce the weight memory by 2-4x (4-bit models use only ~0.5GB per billion parameters).
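The arithmetic for the weights, if you want to play with other sizes (memory for KV cache and activations comes on top of this):

    # Weight memory only, in GB, for a given parameter count and precision
    def weight_gb(params_billion, bits_per_weight):
        return params_billion * bits_per_weight / 8

    for bits in (16, 8, 4):
        print(bits, weight_gb(7, bits), weight_gb(70, bits), weight_gb(405, bits))
    # 16-bit: 14 / 140 / 810 GB;  8-bit: 7 / 70 / 405 GB;  4-bit: 3.5 / 35 / ~203 GB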
I wish people would stop parroting the view that LLMs are lossy compression.
There is kind of a vague sense in which this metaphor holds, but there is a much more interesting and rigorous fact about LLMs which is that they are also _lossless_ compression algorithms.
There are at least two senses in which this is true:
1. You can use an LLM to losslessly compress any piece of text at a cost that approaches the log-likelihood of that text under the model, using arithmetic coding. A sender and receiver both need a copy of the LLM weights.
2. You can use an LLM plus SGD (i.e. the training code) as a lossless compression algorithm, where the communication cost is the area under the training curve (and the model weights don't count towards description length!) - see Jack Rae, "Compression for AGI".
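For intuition on point 1: you don't need to build the arithmetic coder to know what it would cost, because the achievable length is essentially the summed negative log2-probability of the tokens (plus a couple of bits of overhead). Toy numbers, just to show the bookkeeping:

    # Cost of losslessly encoding tokens t_1..t_n with a model-driven
    # arithmetic coder: roughly sum_i -log2 p(t_i | t_<i) bits.
    # Sender and receiver need identical weights and deterministic inference.
    import math

    token_probs = [0.21, 0.9, 0.05, 0.6]   # made-up p(t_i | t_<i) values
    bits = sum(-math.log2(p) for p in token_probs)
    print(bits)  # ~7.5 bits for these four tokens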
> There were projects to try to match it, but generally they operated by fine tuning things like small (70B) llama models on a bunch of GPT-3 generated texts (synthetic data - which can result in degeneration when AI outputs are fed back into AI training inputs).
That parenthetical doesn't quite work for me.
If synthetic data always degraded performance, AI labs wouldn't use synthetic data. They use it because it helps them train better models.
There's a paper that shows that if you very deliberately train a model on its own output in a loop you can get worse performance. That's not what AI labs using synthetic data actually do.
That paper gets a lot of attention because the schadenfreude of models destroying themselves through eating their own tails is irresistible.
Agreed, especially in this context of training a smaller model on a larger model's outputs. Distillation is generally accepted as an effective technique.
This is exactly what I did in a previous role, fine-tuning Llama and Mistral models on a mix of human and GPT-4 data for a domain-specific task. Adding (good) synthetic data definitely increased the output quality for our tasks.
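For what it's worth, the textbook form of this (when you have the teacher's logits rather than just its generated text) is a KL term pulling the student's distribution toward the teacher's; fine-tuning on teacher-generated text is the plainer version of the same idea. A minimal sketch of the logit-matching loss, not any particular lab's recipe:

    import torch
    import torch.nn.functional as F

    def distill_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both distributions, then push the student toward the teacher.
        t = temperature
        s = F.log_softmax(student_logits / t, dim=-1)
        p = F.softmax(teacher_logits / t, dim=-1)
        return F.kl_div(s, p, reduction="batchmean") * (t * t)

    # toy example: 4 positions over a 10-token vocabulary
    student = torch.randn(4, 10)
    teacher = torch.randn(4, 10)
    print(distill_loss(student, teacher))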
Meta: the inclusion of the current year ("(2025)") in the title is strange. Even though it's in the actual title of the linked-to post, repeating it here makes me look around for the time machine controls.
This is a bad article. Some of the information is wrong, and it's missing lots of context.
For example, it somehow merged Llama 4 Maverick's custom Arena chatbot version with Behemoth, falsely claiming that the former is stopping the latter from being released. It also claims 40B of internet text data is 10B tokens, which seems a little odd. Llama 405B was also trained on more than 15 trillion tokens[1], but the post claims only 3.67 trillion for some reason. It also doesn't mention Mistral large for some reason, even though it's the first good European 100B+ dense model.
>The MoE arch. enabled larger models to be trained and used by more people - people without access to thousands of interconnected GPUs
You still need thousands of GPUs to train a MoE model of any actual use. This is true for inference in the sense that it's faster I guess, but even that has caveats because MoE models are less powerful than dense models of the same size, though the trade-off has apparently been worth it in many cases. You also didn't need thousands of GPUs to do inference before, even for the largest models.
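The usual way to make that trade-off concrete is to separate total parameters (what has to sit in memory) from active parameters (what each token actually touches). Purely illustrative numbers, not any specific released model:

    # MoE: memory scales with total params, per-token compute with active params
    def moe_params(shared_b, n_experts, experts_per_token, expert_size_b):
        total = shared_b + n_experts * expert_size_b
        active = shared_b + experts_per_token * expert_size_b
        return total, active

    total, active = moe_params(shared_b=20, n_experts=64,
                               experts_per_token=2, expert_size_b=5)
    print(total, active)  # 340B total in memory, but only 30B used per token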
The conclusion is all over the place, and has lots of just weird and incorrect implications. The title is about how big LLMs are, so why is there such a focus on token training count? Also no mention of quantized size. This is a bad AI slop article (whoops, turns out the author accidentally said it was AI generated, so it's a bad human slop article).
[1] https://ai.meta.com/blog/meta-llama-3-1/
> it somehow merged Llama 4 Maverick's custom Arena chatbot version with Behemoth
I can clarify this part. I wrote 'There was a scandal as facebook decided to mislead people by gaming the lmarena benchmark site - they served one version of llama-4 there and released a different model', which is true.
But it is inside the section about the llama 4 model behemoth. So I see how that could be confusing/misleading.
I could restructure that section a little to improve it.
> Llama 405B was also trained on more than 15 trillion tokens[1],
You're talking about Llama 405B instruct, I'm talking about Llama 405B base. Of course the instruct model has been trained on more tokens.
> why is there such a focus on token training count?
I tried to include the rough training token count for each model I wrote about - plus additional details about training data mixture if available. Training data is an important part of an LLM.
smokel|8 months ago
After that, make the robots explore and interact with the world by themselves, to fetch even more data.
In all seriousness, adding image and interaction data will probably be enormously useful, even for generating text.
andrepd|8 months ago
There's no way the entire Web fits in $400 worth of hard drives.
camel-cdr|8 months ago
I tried to estimate how much data this actually is:
So uncompressed ~30 TB and compressed ~5.5 TB of data. That fits on three 2TB microSD cards, which you could buy for a total of $750 from SanDisk.
charcircuit|8 months ago
Did you mean to type EB?
generalizations|8 months ago
FWIW there is a huge difference between 4.5 and 4o.
OtherShrezzing|8 months ago
I think in these scenarios, articles should include the prompt and generating model.
rain1|8 months ago
Thank you for spotting the error.
kylecazar|8 months ago
There are some signs it's possibly written by a non-native speaker.
1vuio0pswjnm7|8 months ago
https://gist.github.com/rain-1/cf0419958250d15893d8873682492...
2. "superintelligence"
https://en.m.wikipedia.org/wiki/Superintelligence
"Meta is uniquely positioned to deliver superintelligence to the world."
https://www.cnbc.com/2025/06/30/mark-zuckerberg-creating-met...
Is there any difference between 1 and 2?
Yes. One is purely hypothetical.