
Qwen2.5-Max: Exploring the intelligence of large-scale MoE model

118 points | rochoa | 1 year ago | qwenlm.github.io

30 comments


Jackson__|1 year ago

>Many critical details regarding this scaling process were only disclosed with the recent release of DeepSeek V3

And so they decide not to disclose their own training information, just after saying how useful it was to get DeepSeek's? Honestly, I can't say I care about "nearly as good as o1" when it's a closed API with no additional info.

voxgen|1 year ago

It's not even "nearly as good as o1". They only compared to the older 4o.

You can safely assume Qwen2.5-Max will score worse than all of the recent reasoning models (o1, DeepSeek-R1, Gemini 2.0 Flash Thinking).

It'll probably become a very strong model if/when they apply RL training for reasoning. However, all the successful recipes for this are closed source, so it may take some time. They could do SFT based on another model's reasoning chains in the meantime, though the DeepSeek-R1 technical report noted that it's not as good as RL training.
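For concreteness, here is a minimal sketch of that SFT-on-reasoning-traces idea. The student model name, the trace format, and every hyperparameter below are illustrative assumptions, not anything from the Qwen or DeepSeek reports:

```python
# Sketch: supervised fine-tuning a "student" model on a stronger model's
# reasoning traces. All names and numbers here are hypothetical.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")  # assumed student model
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")

# Hypothetical traces: prompts paired with a stronger model's chain-of-thought answers.
traces = [{"text": "Q: What is 2+2?\n<think>2 plus 2 is 4.</think>\nA: 4"}]

def tokenize(example):
    out = tokenizer(example["text"], truncation=True, max_length=2048)
    out["labels"] = out["input_ids"].copy()  # standard causal-LM loss over the whole trace
    return out

ds = Dataset.from_list(traces).map(tokenize, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-out", per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=1e-5),
    train_dataset=ds,
).train()
```

The R1 report's point is that this kind of imitation tops out at the teacher's quality, whereas RL can push past it.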

kragen|1 year ago

I thought there were three DeepSeek items on the HN front page, but this turned out to be a fourth one, because it's the Qwen team saying they have a secret version of Qwen that's actually better than DeepSeek-V3.

I don't remember the last time 20% of the HN front page was about the same thing. Then again, nobody remembers the last time a company's market cap fell by 569 billion dollars the way NVIDIA's did yesterday.

kragen|1 year ago

Somehow I failed to notice that 4 ÷ 30 is not 20%. It's more like 13%. That was a dumb mistake.

caycep|1 year ago

it's a scaling law for stocks!

ecshafer|1 year ago

A Chinese company announcing this on Spring Festival eve is very surprising. The DeepSeek announcement must have lit a fire under them. I'm surprised anything is being done right now at these Chinese tech companies.

rfoo|1 year ago

Well, DeepSeek engineers are (desperately) firefighting, as they don't have nearly as much capacity as needed. Competitors either already rushed a release or decided to do a hush release of whatever they had in the pipeline. Sounds like everyone is working through the holiday.

lostmsu|1 year ago

It's like when Gemini topped the Chatbot Arena leaderboard and OpenAI released a model the next day.

simonw|1 year ago

This appears to be Qwen's new best model, API only for the moment, which they say is better than DeepSeek v3.

Havoc|1 year ago

Kinda ambivalent about MoE in the cloud. Where it could really shine, though, is on desktop-class gear. Memory is getting fast enough that MoEs might soon stop being painfully slow for large-ish models.
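The intuition: single-stream decode is memory-bandwidth bound, and an MoE only has to stream its active experts' weights per token. A back-of-the-envelope sketch; the bandwidth figure, model sizes, and 4-bit quantization are all assumed for illustration:

```python
# Rough upper bound on decode speed for a bandwidth-bound LLM:
# each generated token must read the active parameters from memory once,
# so tokens/sec <= bandwidth / bytes_of_active_params.
# All numbers below are illustrative assumptions, not measurements.

def tokens_per_sec(active_params_billions: float, bytes_per_param: float, bw_gb_s: float) -> float:
    """Bandwidth-limited decode throughput (giga/billion factors cancel)."""
    return bw_gb_s / (active_params_billions * bytes_per_param)

BW = 273.0  # GB/s, an assumed unified-memory desktop figure

# Dense 70B at 4-bit (~0.5 bytes/param): all 70B weights stream per token.
print(f"dense 70B      : {tokens_per_sec(70, 0.5, BW):.1f} tok/s")

# Hypothetical MoE: ~70B total parameters but only ~13B active per token.
print(f"MoE, 13B active: {tokens_per_sec(13, 0.5, BW):.1f} tok/s")
```

Same total weights in RAM, but roughly a 5x difference in tokens/sec under these assumed numbers, which is why MoE is attractive on memory-rich but bandwidth-limited desktops.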

alecco|1 year ago

No weights, no proof.

Tiberium|1 year ago

Would you say the same for OpenAI releasing new models?

mohsen1|1 year ago

This is not the reasoning model. If it beats DeepSeek V3 in benchmarks, I think a 'reasoning' version would beat o1 Pro.

GaggiX|1 year ago

Now they need to fine-tune it the way R1 and o1 were, and it will be competitive with SOTA models.

jondwillis|1 year ago

The significance of _all_ of these releases at once is not lost on me. But the reason for it is lost on me. Is there some convention? Is this political? Business strategy?

logicchains|1 year ago

Today is the last day before the Chinese New Year.

k__|1 year ago

Alibaba probably doesn't want DeepSeek to get all the fame.

halJordan|1 year ago

Sometimes a cigar is just a cigar

a_wild_dandan|1 year ago

> We evaluate Qwen2.5-Max alongside leading models

> [...] we are unable to access the proprietary models such as GPT-4o and Claude-3.5-Sonnet. Therefore, we evaluate Qwen2.5-Max against DeepSeek V3

"We'll compare our proprietary model to other proprietary models. Except when we don't. Then we'll compare to non-proprietary models."