top | item 44801016


Leary | 6 months ago

GPQA Diamond: gpt-oss-120b: 80.1%, Qwen3-235B-A22B-Thinking-2507: 81.1%

Humanity’s Last Exam: gpt-oss-120b (tools): 19.0%, gpt-oss-120b (no tools): 14.9%, Qwen3-235B-A22B-Thinking-2507: 18.2%


jasonjmcghee | 6 months ago

Wow - I will give it a try then. I'm cynical about OpenAI min-maxing benchmarks, but still trying to be optimistic, as this model in 8-bit would be such a nice fit for Apple silicon.

modeless | 6 months ago

Even better, it's 4-bit.
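A back-of-envelope sketch of why the quantization level matters for fitting the model in unified memory. This assumes the advertised ~120B-parameter size class and counts only weight storage; KV cache, activations, and runtime overhead are ignored, so real usage is higher.

```python
# Rough weight-memory footprint at a given quantization level.
# Assumes ~120B total parameters (the model's advertised size class);
# ignores KV cache, activations, and runtime overhead.

def weights_gib(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GiB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weights_gib(120, bits):.0f} GiB")
```

At 4-bit the weights alone come to roughly 56 GiB, which is what makes running it plausible on higher-memory Apple silicon machines, whereas 16-bit would need well over 200 GiB.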

amarcheschi | 6 months ago

GLM-4.5 seems on par as well.

thegeomaster | 6 months ago

GLM-4.5 seems to outperform it on TauBench, too. And it's suspicious that OAI isn't sharing numbers for quite a few useful benchmarks (nothing coding-related, for example).

One positive thing I see is the parameter count and size: it should make inference more economical than the current open-source SOTA.

lcnPylGDnU4H9OF | 6 months ago

Was the Qwen model using tools for Humanity's Last Exam?