>Many critical details regarding this scaling process were only disclosed with the recent release of DeepSeek V3
And so they decide not to disclose their own training information, just after they told everyone how useful it was to get DeepSeek's? Honestly can't say I care about "nearly as good as o1" when it's a closed API with no additional info.
It's not even "nearly as good as o1". They only compared to the older 4o.
You can safely assume Qwen2.5-Max will score worse than all of the recent reasoning models (o1, DeepSeek-R1, Gemini 2.0 Flash Thinking).
It'll probably become a very strong model if/when they apply RL training for reasoning. However, all the successful recipes for this are closed source, so it may take some time. They could do SFT based on another model's reasoning chains in the meantime, though the DeepSeek-R1 technical report noted that it's not as good as RL training.
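To make it concrete, "SFT on another model's reasoning chains" just means supervised fine-tuning where the targets are a teacher model's chain-of-thought plus final answer. A rough sketch of what that looks like (the base model name, the single example, and the field names are placeholders I made up, not anything from the R1 report):

    # Toy SFT-on-reasoning-chains sketch (distillation via supervised fine-tuning).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")      # placeholder base model
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
    opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

    # Each example pairs a prompt with a teacher model's chain-of-thought + answer.
    examples = [
        {"prompt": "What is 17 * 24?",
         "completion": "<think>17*24 = 17*20 + 17*4 = 340 + 68 = 408</think> 408"},
    ]

    model.train()
    for ex in examples:
        text = ex["prompt"] + "\n" + ex["completion"] + tok.eos_token
        batch = tok(text, return_tensors="pt")
        # Plain causal-LM loss over the whole sequence; a real recipe would mask
        # the prompt tokens so the loss only covers the reasoning and answer.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()

The RL approach instead samples the model's own chains and updates on a reward (e.g. answer correctness), which is why it's harder to reproduce from the outside.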
I thought there were three DeepSeek items on the HN front page, but this turned out to be a fourth one, because it's the Qwen team saying they have a secret version of Qwen that's actually better than DeepSeek-V3.
I don't remember the last time 20% of the HN front page was about the same thing. Then again, nobody remembers the last time a company's market cap fell by 569 billion dollars like NVIDIA did yesterday.
A Chinese company announcing this on Spring Festival eve is very surprising. The DeepSeek announcement must have lit a fire under them. I'm surprised anything is getting done at these Chinese tech companies right now.
Well, DeepSeek engineers are (desperately) firefighting since they don't have nearly as much capacity as they need. Competitors either already rushed a release or decided to do a hush release of whatever they had in the pipeline. Sounds like everyone is working through the holiday.
Kinda ambivalent about MoE in the cloud. Where it could really shine, though, is on desktop-class gear. Memory is getting fast enough that MoEs might soon not be painfully slow for large-ish models.
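The reason it could shine there: with top-k routing only a couple of experts' weights get read per token, so the bandwidth cost per token is a fraction of a dense model with the same total parameter count. A toy sketch of that routing (all names and sizes made up):

    import torch
    import torch.nn as nn

    class TinyMoE(nn.Module):
        """Toy mixture-of-experts layer: each token is routed to its top-k
        experts, so only k of n_experts weight matrices are touched per token."""
        def __init__(self, d_model=64, n_experts=8, k=2):
            super().__init__()
            self.k = k
            self.router = nn.Linear(d_model, n_experts)            # per-expert scores
            self.experts = nn.ModuleList(
                nn.Linear(d_model, d_model) for _ in range(n_experts)
            )

        def forward(self, x):                                       # x: (tokens, d_model)
            scores = self.router(x).softmax(dim=-1)
            weights, idx = scores.topk(self.k, dim=-1)               # top-k experts per token
            out = torch.zeros_like(x)
            for t in range(x.shape[0]):
                for w, e in zip(weights[t], idx[t]):
                    out[t] += w * self.experts[int(e)](x[t])         # only k experts run/read
            return out

    y = TinyMoE()(torch.randn(4, 64))    # 4 tokens, each hitting 2 of 8 experts

The per-token loop is just for clarity; real implementations gather tokens per expert and do batched matmuls.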
The significance of _all_ of these releases at once is not lost on me. But the reason for it is lost on me. Is there some convention? Is this political? Business strategy?
BhavdeepSethi|1 year ago
Source: https://x.com/Alibaba_Qwen/status/1884263157574820053
zone411|1 year ago
https://github.com/lechmazur/nyt-connections/
a_wild_dandan|1 year ago
> [...] we are unable to access the proprietary models such as GPT-4o and Claude-3.5-Sonnet. Therefore, we evaluate Qwen2.5-Max against DeepSeek V3
"We'll compare our proprietary model to other proprietary models. Except when we don't. Then we'll compare to non-proprietary models."