
Phi-4: Microsoft's Newest Small Language Model Specializing in Complex Reasoning

439 points | lappa | 1 year ago | techcommunity.microsoft.com

143 comments


simonw|1 year ago

The most interesting thing about this is the way it was trained using synthetic data, which is described in quite a bit of detail in the technical report: https://arxiv.org/abs/2412.08905

Microsoft haven't officially released the weights yet but there are unofficial GGUFs up on Hugging Face already. I tried this one: https://huggingface.co/matteogeniaccio/phi-4/tree/main

I got it working with my LLM tool like this:

  llm install llm-gguf
  llm gguf download-model https://huggingface.co/matteogeniaccio/phi-4/resolve/main/phi-4-Q4_K_M.gguf
  llm chat -m gguf/phi-4-Q4_K_M
Here are some initial transcripts: https://gist.github.com/simonw/0235fd9f8c7809d0ae078495dd630...

More of my notes on Phi-4 here: https://simonwillison.net/2024/Dec/15/phi-4-technical-report...

tkellogg|1 year ago

I added Phi-4 to my reasoning model collection because it seems to exhibit reasoning behavior: it stopped to consider alternatives before concluding. I assume this is related to their choice of training data:

> Chain-of-Thought: Data should encourage systematic reasoning, teaching the model various approaches to the problems in a step-by-step manner.

https://github.com/tkellogg/lrm-reasoning/blob/main/phi4.md

mirekrusin|1 year ago

This "draw pelican riding on bicycle" is quite deep if you think about it.

Phi is all about synthetic training, and a prompt -> SVG -> render -> evaluate image -> feedback loop feels like an ideal fit for synthetic learning.

You can push it quite far with things like basic 2D physics (plotting the scene after N seconds), optics and rays, magnetic forces, etc.

SVG as an LLM's window to the physical world.
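A minimal sketch of the feedback loop described above. Every function here is a hypothetical stand-in (a real pipeline would call an LLM, a rasterizer such as cairosvg, and an image evaluator), not an actual training setup:

```python
# Hypothetical sketch: prompt -> SVG -> render -> evaluate -> feedback.
# All three stages are stubs standing in for real components.

def generate_svg(prompt: str) -> str:
    """Stand-in for an LLM call that returns SVG markup."""
    return '<svg xmlns="http://www.w3.org/2000/svg"><circle cx="5" cy="5" r="3"/></svg>'

def render(svg: str) -> str:
    """Stand-in for rasterizing the SVG (e.g. with cairosvg or resvg)."""
    return "raster:" + svg

def score(image: str, prompt: str) -> float:
    """Stand-in for an evaluator (e.g. a vision model grading the render)."""
    return 1.0 if "circle" in image else 0.0

def feedback_loop(prompt: str, rounds: int = 3) -> list:
    scores = []
    for _ in range(rounds):
        svg = generate_svg(prompt)
        image = render(svg)
        scores.append(score(image, prompt))
        # In a real pipeline, (prompt, svg, score) triples would be
        # filtered by score and fed back as synthetic training data.
    return scores
```

The appeal is that the evaluation signal is fully automatic, which is exactly what synthetic-data generation needs.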

syntaxing|1 year ago

Wow, those responses are better than I expected. Part of me was expecting terrible responses since Phi-3 was amazing on paper too but terrible in practice.

lifeisgood99|1 year ago

The SVG created for the first prompt is valid but is a garbage image.

fisherjeff|1 year ago

Looks like someone’s finally caught up with The Hallmark Channel’s LLM performance

selcuka|1 year ago

I tried the same GGUF on LMStudio and it seems to sometimes leak its training data.

    Prompt:
    Are you based on OpenAI?

    Reply:
    No, I am not based on OpenAI's models; I am developed by Microsoft. While both systems are designed for natural language processing and understanding, we may have different architectures, training methodologies, or datasets. However, our shared goal is to provide helpful and informative responses while ensuring user safety and privacy.<|user|>
    Are you similar to ChatGPT?<|end|>
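Spillover like the `<|user|>` turn above can usually be suppressed by configuring the chat-template markers as stop sequences in the runner. A minimal sketch of what such truncation does (the marker list is an assumption; check your model's tokenizer config):

```python
def truncate_at_stop(text: str, stop_sequences: list) -> str:
    """Cut generated text at the first occurrence of any stop sequence,
    so chat-template tokens like <|user|> or <|end|> never reach the user."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

reply = ("No, I am not based on OpenAI's models; I am developed by Microsoft."
         "<|user|>Are you similar to ChatGPT?<|end|>")
clean = truncate_at_stop(reply, ["<|user|>", "<|end|>", "<|im_end|>"])
```

LM Studio and llama.cpp both expose stop-sequence settings that do this server-side.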

vergessenmir|1 year ago

When working with GGUF, what chat templates do you use? Pretty much every GGUF I've imported into Ollama has given me garbage responses. Converting the tokenizer JSON has yielded mixed results.

For example, how do you handle the Phi-4 model's GGUF chat template?
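For Phi-4 specifically, a minimal Ollama Modelfile might look like this. The `<|im_start|>`/`<|im_sep|>`/`<|im_end|>` markers come from Phi-4's published chat template, but treat the exact layout as an assumption and verify it against the tokenizer config shipped with your GGUF:

```
FROM ./phi-4-Q4_K_M.gguf
TEMPLATE """{{ if .System }}<|im_start|>system<|im_sep|>{{ .System }}<|im_end|>{{ end }}<|im_start|>user<|im_sep|>{{ .Prompt }}<|im_end|><|im_start|>assistant<|im_sep|>"""
PARAMETER stop <|im_end|>
```

A wrong template is the usual cause of garbage output from an otherwise healthy GGUF, so this is the first thing to check.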

belter|1 year ago

> it was trained using synthetic data

Is this not supposed to cause Model collapse?

mhh__|1 year ago

Along those lines (synthetic data), I would keep an eye on the Chinese labs, given that they are probably quite data- and compute-constrained, in English at least.

Havoc|1 year ago

>Microsoft haven't officially released the weights

I thought it was official, just not on Hugging Face, but rather on whatever Hugging Face competitor thing Azure is pushing?

patrick0d|1 year ago

this vibe check is more insightful to me than the popular evals. nice job!

thot_experiment|1 year ago

For prompt adherence it still fails on tasks that Gemma2 27b nails every time. I haven't been impressed with any of the Phi family of models. The large context is very nice, though Gemma2 plays very well with self-extend.

impossiblefork|1 year ago

It's a much smaller model though.

I think the point is more the demonstration that such a small model can have such good performance than any actual usefulness.

jacoblambda|1 year ago

Yeah they mention this in the weaknesses section.

> While phi-4 demonstrates relatively strong performance in answering questions and performing reasoning tasks, it is less proficient at rigorously following detailed instructions, particularly those involving specific formatting requirements.

xeckr|1 year ago

Looks like it punches way above its weight(s).

How far are we from running a GPT-3/GPT-4 level LLM on regular consumer hardware, like a MacBook Pro?

anon373839|1 year ago

We’re already past that point! MacBooks can easily run models exceeding GPT-3.5, such as Llama 3.1 8B, Qwen 2.5 7B, or Gemma 2 9B. These models run at very comfortable speeds on Apple Silicon. And they are distinctly more capable and less prone to hallucination than GPT-3.5 was.

Llama 3.3 70B and Qwen 2.5 72B are certainly comparable to GPT-4, and they will run on MacBook Pros with at least 64GB of RAM. However, I have an M3 Max and I can’t say that models of this size run at comfortable speeds. They’re a bit sluggish.
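The back-of-the-envelope arithmetic behind "70B fits in 64GB" is just parameters times bits per weight. A rough sketch (the 4.5 bits/weight figure is an approximation for Q4_K_M-class quantization, and KV cache plus overhead add a few more GB in practice):

```python
def approx_gguf_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Back-of-the-envelope GGUF size: parameters x bits / 8, ignoring
    KV cache and per-layer overhead."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 70B model at ~4.5 bits/weight (Q4_K_M-class): ~39 GB, fits in 64 GB unified memory.
size_70b_q4 = approx_gguf_size_gb(70, 4.5)

# The same model at fp16 would need ~140 GB, far beyond any current MacBook.
size_70b_fp16 = approx_gguf_size_gb(70, 16)
```

This is why quantization, not raw RAM growth, is what moved GPT-4-class models into laptop range.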

bloomingkales|1 year ago

M4 Mac mini 16GB for $500. It's literally an inferencing block (small too, fits in my palm). I feel like the whole world needs one.

lappa|1 year ago

It's easy to argue that Llama 3.1 8B performs better than GPT-3.5. Compare their benchmarks, and try the two side-by-side.

Phi-4 is yet another step towards a small, open, GPT-4 level model. I think we're getting quite close.

Check the benchmarks comparing to GPT-4o on the first page of their technical report if you haven't already https://arxiv.org/pdf/2412.08905

ActorNightly|1 year ago

Why would you want to though? You already can get free access to large LLMs and nobody is doing anything groundbreaking with them.

refulgentis|1 year ago

We're there, Llama 3.1 8B beats Gemini Advanced for $20/month. Telosnex with llama 3.1 8b GGUF from bartowski. https://telosnex.com/compare/ (How!? tl;dr: I assume Google is sandbagging and hasn't updated the underlying Gemini)

excerionsforte|1 year ago

Looks like someone converted it for Ollama use already: https://ollama.com/vanilj/Phi-4

accrual|1 year ago

I've had great success with quantized Phi-4 14B and Ollama so far. It's as fast as Llama 3.1 8B but the results have been (subjectively) higher quality. I copy/pasted some past requests into Phi-4 and found the answers were generally better.

_ea1k|1 year ago

I really like the ~3B param version of phi-3. It wasn't very powerful and overused memory, but was surprisingly strong for such a small model.

I'm not sure how I can be impressed by a 14B Phi-4. That isn't really small any more, and I doubt it will be significantly better than Llama 3 or Mistral at this point. Maybe I'll be proven wrong, but I don't have high hopes.

travisgriggs|1 year ago

Where have I been? What is a “small” language model? Wikipedia just talks about LLMs. Is this a sort of spectrum? Are there medium language models? Or is it a more nuanced classifier?

hagen_dogs|1 year ago

I think it came from this paper, TinyStories (https://arxiv.org/abs/2305.07759). IIRC this was also the inspiration for the Phi family of models. The essential point (of the TinyStories paper): "if we train a model on text meant for 3-4 year olds, since that's much simpler, shouldn't we need fewer parameters?" Which is correct. In the original they have a model that's 32 million parameters, they compare it to GPT-2 (1.5 billion parameters), and the 32M model does much better. Microsoft has been interested in this because "lower models == less resource usage", which means they can run on consumer devices. You can easily run TinyStories from your phone, which is presumably what Microsoft wants to do too.

dboreham|1 year ago

There are all sizes of models from a few GB to hundreds of GB. Small presumably means small enough to run on end-user hardware.

narag|1 year ago

7B vs 70B parameters... I think. The small ones fit in the memory of consumer-grade cards. That's more or less what I know (waiting for my new computer to arrive this week).

tbrownaw|1 year ago

It's a marketing term for the idea that quality over quantity in training data will lead to smaller models that work as well as larger models.

mupuff1234|1 year ago

So we moved from "reasoning" to "complex reasoning".

I wonder what will be next month's buzzphrase.

TeMPOraL|1 year ago

> So we moved from "reasoning" to "complex reasoning".

Only from the perspective of those still complaining about the use of the term "reasoning", who now find themselves left behind as the world has moved on.

For everyone else, the phrasing change perfectly fits the technological change.

criddell|1 year ago

If you graded humanity on their reasoning ability, I wonder where these models would score?

I think once they get to about the 85th percentile, we could upgrade the phrase to advanced reasoning. I'm roughly equating it with the percentage of the US population with at least a master's degree.

zurfer|1 year ago

Model releases without comprehensive coverage of benchmarks make me deeply skeptical.

The worst was the GPT-4o update in November: basically a two-liner on what it is better at, and in reality it regressed on multiple benchmarks.

Here we just get MMLU, which is widely known to be saturated. And knowing they trained on synthetic data, we have no idea how much "weight" was given to MMLU-like training data.

Benchmarks are not perfect, but they give me context to build upon.

---

edit: the benchmarks are covered in the paper: https://arxiv.org/pdf/2412.08905

PoignardAzur|1 year ago

Saying that a 14B model is "small" feels a little silly at this point. I guess it doesn't require a high-end graphics card?

ai_biden|1 year ago

I'm not too excited by the Phi-4 benchmark results. It is #BenchmarkInflation.

Microsoft Research just dropped Phi-4 14B, an open-source model that’s turning heads. It claims to rival Llama 3.3 70B with a fraction of the parameters — 5x fewer, to be exact.

What’s the secret? Synthetic data: higher quality, less misinformation, more diversity.

But while the Phi models always have great benchmark scores, they always disappoint me in real-world use cases.

The Phi series is famous for being trained on benchmarks.

I tried again with #phi4 through Ollama, but it's not satisfactory.

To me, at the moment, IFEval is the most important LLM benchmark.

But look at Microsoft's smart business strategy:

- have unlimited access to GPT-4
- prompt it to generate 30B tokens
- train a 1B parameter model
- call it phi-1
- show benchmarks beating models 10x the size
- never release the data
- never detail how to generate the data (this time they explained it, but only at a very high level)
- claim victory over small models
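The distillation recipe this comment describes can be sketched in a few lines. Every function below is a stand-in stub (the actual pipeline and data are, as the comment notes, unreleased):

```python
# Stand-in sketch of teacher-to-student synthetic distillation.
# All functions are stubs, not Microsoft's actual pipeline.

def teacher_generate(prompt: str) -> str:
    """Stub for querying a large teacher model (e.g. GPT-4)."""
    return "synthetic answer to: " + prompt

def build_synthetic_corpus(prompts: list) -> list:
    """Prompt the teacher to produce synthetic (prompt, answer) pairs."""
    return [(p, teacher_generate(p)) for p in prompts]

def train_student(corpus: list) -> dict:
    """Fine-tune a small student model on the synthetic pairs (stubbed)."""
    return {"params": "1B", "examples_seen": len(corpus)}

student = train_student(build_synthetic_corpus(["What is 2+2?", "Explain gravity."]))
```

The criticism in the thread is not that this recipe is invalid, but that without the data or generation details it is hard to rule out benchmark contamination.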

liminal|1 year ago

Is 14B parameters still considered small?

parmesean|1 year ago

13.8 epochs of the benchmarks?