Mostly SOTA performance at the 3B level. A notable addition to the small but truly open club of models that provide full disclosure, code, and recipes to reproduce their work.
Looks like ballpark a million dollars of GPU time if you want to train one up for yourself (384 H100s for 24 days).
Very nice write-up that's generous in sharing their learnings.
This is a solid and positive contribution.
I spent about 10 minutes this AM cross-checking against Phi-4-mini's benchmarks, since it seemed very odd not to include the leader in the comparisons, and SmolLM3 appeared universally behind it.
For context, I dev an LLM client; a core tenet is keeping local as close to cloud parity as possible (via llama.cpp).
Companies aren't taking local AI seriously on a sustained basis outside Microsoft.
Overall, I usually would bite my tongue. HF is a great citizen, and I doubt this'll be a one off. However, when I see superlatives affirmed while leaving out the model that has been the local SoTA for many, many moons and is a godsend in this sector, I think it is better to stand up and say so than to shy away.
It's small (3B) and does great on benchmarks. This is a model for edge / mobile deployments, so the gains over gemma3-4b are meaningful. It has dual-mode reasoning / non-reasoning, AND they released the full training method:
> We're releasing SmolLM3 with our engineering blueprint. It includes architecture details, exact data mixtures showing how we progressively boost performance across domains in a three-stage pretraining approach, and the methodology for building a hybrid reasoning model. Usually, achieving these results would require months of reverse engineering. Instead, we're providing the full methodology.
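As a rough illustration of the dual mode, here is a minimal transformers sketch. The repo id and the /think and /no_think system-prompt flags are assumptions based on the announcement, so check the model card for the exact toggle mechanism.

    # Sketch of toggling the reasoning mode via the system prompt.
    # Assumptions: repo id "HuggingFaceTB/SmolLM3-3B" and the "/think" / "/no_think"
    # flags described in the release notes; verify against the model card.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "HuggingFaceTB/SmolLM3-3B"  # assumed repo name
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    def chat(user_msg: str, think: bool) -> str:
        messages = [
            {"role": "system", "content": "/think" if think else "/no_think"},
            {"role": "user", "content": user_msg},
        ]
        inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
        out = model.generate(inputs, max_new_tokens=512)
        return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

    print(chat("What is 17 * 23?", think=True))   # emits a reasoning trace first
    print(chat("What is 17 * 23?", think=False))  # answers directly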
I hate to say it, but reasoning models simply aren't suited for edge computing. I just ran some tests on this model, and even at 4-bit weight quantisation it blows past 10 GB of VRAM with just ~1000 tokens while it is still reasoning. So even if you're running on a dedicated ML edge device like a $250 Jetson, you will run out of memory before the model even formulates a real answer. You'll need a high-end GPU to make full use of it for limited answers and an enterprise-grade system to support longer contexts. And with reasoning turned off I don't see any meaningful improvement over older models.
So this is primarily great for enterprises who want to do on-prem with limited budgets and maybe high-end enthusiasts.
Wow. Close to a Qwen3 distill at 75% of the size. That's great!
I've been using the SmolLM base models for my own finetunes just because they're so high quality; it looks like I might be using them to drive local agents / code completion in the near future too.
Their RL algorithm looks interesting. I'm still using OpenAI's algorithm for my stuff; I've been meaning to check on the SoTA, since I know my code is pretty outdated. (It's crazy how fast that happens with this stuff.)
This seems to be a persistent issue with almost all weight releases, even from bigger companies like Meta.
Are the people who release these weights not testing them in various inference engines? It seems they make it work with Hugging Face's Transformers library and call it a day, but sometimes not even that.
Which small model is good for fine-tuning on various enterprise data sets? Our business units want to run small models in the browser and on mobile devices, without dealing with RAG and cloud resources.
Small models are bad at knowing things. Trying to train knowledge into small models is probably not the way you want to go. You could try building an offline embedded RAG system that is deployable as wasm. Some folks have been having success with this.
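A minimal sketch of the retrieval half of such an embedded RAG setup is below; the embedding model name is illustrative and the wasm packaging isn't shown, so treat it as the shape of the idea rather than a recipe.

    # Local RAG retrieval sketch: embed documents once, then answer queries by
    # prepending the best-matching chunks to the prompt of a small model.
    # Assumes sentence-transformers; the embedding model name is illustrative.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    docs = [
        "Refund requests must be filed within 30 days of purchase.",
        "Enterprise support is available Monday to Friday, 9am-5pm CET.",
        "The on-prem installer requires Docker 24 or newer.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small enough to run on CPU
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    def retrieve(query: str, k: int = 2) -> list[str]:
        q = embedder.encode([query], normalize_embeddings=True)[0]
        scores = doc_vecs @ q  # cosine similarity (vectors are normalized)
        return [docs[i] for i in np.argsort(-scores)[:k]]

    question = "When can customers get a refund?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    print(prompt)  # feed this to the small model instead of fine-tuning facts into it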
You really need to try them all out yourself and make sure you have proper benchmarks.
While machine learning is not my field, I've tried to fine-tune Mistral 7B (following their official guide and toolset) and the results were not satisfying. I had a few very specific questions from the dataset that, no matter how much I fine-tuned and tweaked the process, it was not able to answer with correct information.
A mix of vector search + keyword search is still better at building the right question context than expecting the model to learn all the information.
I used the raw, pretraining-style dataset approach. Maybe building synthetic questions and answers around the dataset yields better results, but I didn't have time to experiment with that approach.
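To make the hybrid vector + keyword idea above concrete, here is a minimal sketch that fuses BM25 scores with embedding similarity; the libraries and the 50/50 weighting are illustrative choices you would tune.

    # Hybrid retrieval sketch: blend BM25 keyword scores with embedding similarity.
    # Assumes rank_bm25 and sentence-transformers; the weighting is illustrative.
    import numpy as np
    from rank_bm25 import BM25Okapi
    from sentence_transformers import SentenceTransformer

    docs = ["Mistral 7B fine-tuning guide", "Quarterly revenue report 2024", "VPN setup for contractors"]
    query = "how do I set up the VPN"

    # Keyword side: BM25 over whitespace-tokenized documents.
    bm25 = BM25Okapi([d.lower().split() for d in docs])
    kw_scores = np.array(bm25.get_scores(query.lower().split()))

    # Vector side: cosine similarity of normalized embeddings.
    emb = SentenceTransformer("all-MiniLM-L6-v2")
    doc_vecs = emb.encode(docs, normalize_embeddings=True)
    vec_scores = doc_vecs @ emb.encode([query], normalize_embeddings=True)[0]

    # Rescale each score list to [0, 1] and blend 50/50 (tune this).
    def rescale(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-9)

    combined = 0.5 * rescale(kw_scores) + 0.5 * rescale(vec_scores)
    print(docs[int(np.argmax(combined))])  # best match: the VPN setup document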
Bite the bullet and do some kind of RAG; you need to provide clear, authoritative information to a model that is skilled enough to remix it for the user.
Tuning the model to imitate the dataset will damage the model's skills and "common sense", but it won't train it to reliably recall information.
I have fine-tuned Gemma 3N 2B and it's pretty good, but it loads slowly on my S23U; once it's loaded, though, it works fine.
I also tried SmolVLM 256M and 500M. They load faster and you can embed them in the app's assets; they work if you know what you're doing.
Just keep in mind that smaller models don't perform as well due to their limited parameter count.
Also, on Android, since you can't ship files larger than 2 GB due to Java compression issues, you need to download models separately. You then can't load the model from the download folder; you have to copy it into the app's own folder. This means a Gemma 3N 2B model that's 3.14 GB needs at least 7 GB of free space on the user's phone.
I'm having trouble running this on my Mac - I've tried Ollama and llama.cpp llama-server so far, both using GGUFs from Hugging Face, but neither worked.
(llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'smollm3')
I've managed to run it using Python and transformers with PyTorch in device="cpu" mode but unsurprisingly that's really slow - it took 35s to respond to "say hi"!
Anyone had success with this on a Mac yet? I really want to get this running with tool calling, ideally via an OpenAI-compatible serving layer like llama-server.
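For the serving-layer route, llama-server exposes an OpenAI-compatible endpoint, so once SmolLM3 support lands something along these lines should work; the server invocation, port, and model name below are assumptions.

    # Sketch of talking to llama-server's OpenAI-compatible API once SmolLM3
    # support has landed in llama.cpp. Start the server with something like:
    #   llama-server -hf unsloth/SmolLM3-3B-GGUF:Q4_K_XL --jinja --port 8080
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
    resp = client.chat.completions.create(
        model="smollm3-3b",  # llama-server serves whatever model it loaded
        messages=[{"role": "user", "content": "say hi"}],
        max_tokens=64,
    )
    print(resp.choices[0].message.content)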
Hey Simon, VB from Hugging Face here, the person who added the model to MLX and llama.cpp (with Son). The PR hasn't yet landed in llama.cpp, hence it doesn't work out of the box with llama.cpp installed via brew (and similarly doesn't work with Ollama, since they need to bump their llama.cpp runtime).
The vocabulary size is fairly small (128,256) for a multilingual model. I would guess it doesn't require many additional parameters to support these 5 languages as many tokens can be shared.
Typically, multilingual capabilities consume 20-30% of model parameters in small LLMs, primarily in token embeddings and early transformer layers. Monolingual variants of similar models often perform better on English benchmarks with the same parameter count.
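For a rough sense of what the vocabulary itself costs, some back-of-the-envelope arithmetic; the 2048 hidden size, tied input/output embeddings, and ~3.1B total parameters are assumptions to check against the actual config.

    # Back-of-the-envelope share of parameters spent on token embeddings.
    vocab_size = 128_256          # as stated above
    hidden_size = 2_048           # assumed
    total_params = 3.1e9          # assumed

    embedding_params = vocab_size * hidden_size   # one matrix if embeddings are tied
    print(f"{embedding_params / 1e6:.0f}M embedding params, "
          f"{embedding_params / total_params:.1%} of the model")
    # -> roughly 263M parameters, about 8.5% of a ~3B model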
It's interesting that it looks like they didn't apply their own RL to the model, and instead fine-tuned on reasoning traces from large datasets and on traces generated by larger models.
Indeed we opted for offline methods like Anchored Preference Optimization as we found in the Open R1 project that doing multi-task RL on small models is quite a hassle to get right. With offline methods, you focus much more on dataset curation / generation, but that still provides faster iteration cycles for the model scale we’re dealing with!
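For readers unfamiliar with the offline preference family, a minimal sketch of a DPO-style objective is below; Anchored Preference Optimization changes how the chosen and rejected terms are anchored to the reference model, so treat this as an illustration of the family rather than SmolLM3's exact recipe.

    # Minimal DPO-style preference loss (illustrative of offline preference methods;
    # APO variants anchor the chosen/rejected terms differently).
    import torch
    import torch.nn.functional as F

    def preference_loss(policy_chosen_logps, policy_rejected_logps,
                        ref_chosen_logps, ref_rejected_logps, beta=0.1):
        """Inputs are summed log-probs of whole responses, shape (batch,)."""
        chosen_ratio = policy_chosen_logps - ref_chosen_logps        # log pi/pi_ref, preferred
        rejected_ratio = policy_rejected_logps - ref_rejected_logps  # same, dispreferred
        # Push the preferred response's log-ratio above the dispreferred one's.
        return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

    # Toy call with random numbers standing in for real log-probs.
    batch = [torch.randn(4) for _ in range(4)]
    print(preference_loss(*batch))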
Looks like it's the 3B models that are being shipped on-device by default. Apple's on-device LLM is 3B, and I believe Chrome Canary is shipping Gemini Nano:
https://developer.chrome.com/docs/ai/rewriter-api
From what I've heard, the Llama 3 models are fairly easy to fine-tune (please correct me if I'm wrong or if there are more amenable models here). How easy is it to fine-tune SmolLM3? I know a lot of the MoE LLMs have been quite fickle in this regard.
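SmolLM3 is a dense (non-MoE) model, so the usual LoRA recipe should apply; here is a minimal sketch with peft, where the repo id and target-module names are assumptions to verify against the model's config.

    # Minimal LoRA setup sketch for SmolLM3 with peft.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_id = "HuggingFaceTB/SmolLM3-3B"  # assumed repo name
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    lora = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # assumed Llama-style projection names
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only the adapter weights will train

    # From here, train with your usual Trainer / SFT loop on chat-formatted data.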
I've tried gemma3:4b, which comes up better in that benchmark, and found it quite disappointing. It breaks a lot, sucks even worse than qwen2.5-coder:7b and incept5/llama3.1-claude:7b at code, and needs to be tricked or threatened into saying stuff about many everyday topics. It also commonly chugs away for minutes exercising the GPU fans before responding, at which point I'm already ahead because I've figured out another way to solve my problem or get at some information.
My experience with phi4-mini and granite3.3 was about the same, and they annoy me even more when I hook them into code editors and try to get them to contribute to my work. For one, they're slow; and at best they suggest adding unnecessary error handling in the style of null checks everywhere, while at worst they just start mixing up or hallucinating programming languages. Where they would be useful as leverage if they worked, i.e. close to the edge of where I can debug and refactor without getting stuck, they just go into straight nonsense mode, especially on terse first-pass code.
Sometimes I've tried to query these things for descriptions of recent history in foreign countries, Wikipedia trivia basically, and they're very often wrong in subtle ways. For example, a politician might have been at it for half a century or so in a troubled country, and because they were ousted in a coup once in the eighties, the model is absolutely sure they can't have been in office since.
If a person acted like these things do I'd wish for them to get immediate institutional care. Maybe the problem is somehow with me, but I have a deep suspicion it's not.
Great to see Hugging Face stick to their guns with CodeEval and Python tooling. Agentic turn-by-turn tool calling is fine and all, but we're underutilising models' ability to write and execute code in an "agent-like" environment.
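To make the "write and execute code" pattern concrete, here is a minimal sketch of that loop; generate() is a stub standing in for any local model call, and the bare exec() is only for illustration since a real deployment would sandbox it.

    # "Model writes code, host executes it, result goes back" loop.
    import io
    import contextlib

    def generate(prompt: str) -> str:
        # Stub: a real implementation would call the local model and extract
        # the code block from its reply.
        return "result = sum(i * i for i in range(10))\nprint(result)"

    def run_python(code: str) -> str:
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {})  # sandbox this in real use (subprocess, container, wasm...)
        return buf.getvalue().strip()

    task = "Compute the sum of squares of 0..9 and print it."
    code = generate(f"Write Python to solve: {task}")
    observation = run_python(code)
    print("model wrote:\n", code)
    print("execution result:", observation)  # feed this back as the next turn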
Standards have shifted as well. GPT-2 used to be considered "large", but it is half the size of this. Oh, and Sam Altman said it was too dangerous to release. At this point I consider anything too big to run on consumer-grade hardware to be large, but an exact definition is a little silly to argue about.
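# Runs Unsloth's 4-bit GGUF with llama.cpp: -hf pulls the quant from Hugging Face, --jinja enables the bundled chat template, -ngl 99 offloads all layers to the GPU.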
./llama.cpp/llama-cli -hf unsloth/SmolLM3-3B-GGUF:Q4_K_XL --jinja -ngl 99
I hope you continue the 50-100M parameter models.
I think there is a case for models that finish fast on CPUs in "solve by LLM" test cases.
The easiest would be to install llama.cpp from source: https://github.com/ggml-org/llama.cpp
If you want to avoid it, I added SmolLM3 to MLX-LM as well:
You can run it via `mlx_lm.chat --model "mlx-community/SmolLM3-3B-bf16"`
(requires the latest mlx-lm to be installed)
here's the MLX-lm PR if you're interested: https://github.com/ml-explore/mlx-lm/pull/272
similarly, llama.cpp here: https://github.com/ggml-org/llama.cpp/pull/14581
Let me know if you face any issues!
"So it's a small large language model?"
"Oh yes, very small."
"How can it be small and large at the same time?"
"Well, it's small by the standards of a large language model."
"So it's large."
"Oh yes, very large."
"Large compared to what?"
"Small language models."
"And so something like ChatGPT, what would that be exactly? A large large language model?"
"Yes, precisely. An LLLM."