top | item 47201622

vlovich123 | 1 day ago

The hardware difference explains runtime performance differences, not task performance.

Speculation is that the frontier models are all below 200B parameters, but a 2x size difference wouldn't fully explain task performance differences.

nl|20 hours ago

> Speculation is that the frontier models are all below 200B parameters

Some versions of some of the models are around that size, which you might hit, for example, via the ChatGPT auto-router.

But the frontier models are all over 1T parameters. Source: interviews with people who have left one of the big three labs, now work at the Chinese labs, and talk about how to train 1T+ models.

BoredomIsFun|13 hours ago

Certainly not Opus. That beast feels very heavy - the coherence of longer-form prose is usually a good marker, and it can spit out coherent 4,000-word short stories in a single shot.

NamlchakKhandro|21 hours ago

> The hardware difference explains runtime performance differences, not task performance.

Yes it does.

827a|20 hours ago

He's running a 35B parameter model. Frontier models are well over a trillion parameters at this point. Parameters = smarts. There are 1T+ open source models (e.g. GLM5), and they're actually getting to the point of being comparable with the closed source models; but you cannot remotely run them on any hardware available to us.

Core speed/count and memory bandwidth determine your performance. Memory size determines your maximum model size, which determines your smarts. Broadly speaking.
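The "memory size determines model size" point can be made concrete with a back-of-the-envelope calculation. A minimal sketch (the function name and the 4-bit figure are illustrative assumptions): weight memory is roughly parameter count times bits per parameter, ignoring KV cache and activation overhead.

```python
def weight_memory_gb(params_billion: float, bits_per_param: float) -> float:
    """Rough memory footprint of model weights alone (decimal GB).

    Ignores KV cache, activations, and runtime overhead, so real
    requirements are somewhat higher.
    """
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A 35B model at 4-bit quantization needs ~17.5 GB just for weights,
# which fits on consumer hardware.
print(weight_memory_gb(35, 4))    # 17.5
# A 1T-parameter model at the same 4-bit quantization needs ~500 GB,
# far beyond any single consumer GPU.
print(weight_memory_gb(1000, 4))  # 500.0
```

This is why a 1T+ open model being "comparable" on benchmarks doesn't help a local-hardware user: the weights alone won't fit.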

regularfry|8 hours ago

The architecture is also important: there's a trade-off for MoE. There used to be a rough rule of thumb that a 35B-A3B model (35B total parameters, 3B active per token) would be equivalent in smarts to an 11B dense model, give or take, but that hasn't been accurate for a while.
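The rule of thumb alluded to here is usually stated as the geometric mean of total and active parameters. A minimal sketch, assuming that formulation (the function name is illustrative, and as the comment notes, the heuristic is dated):

```python
import math

def dense_equivalent_b(total_b: float, active_b: float) -> float:
    """Old MoE heuristic: dense-equivalent size is roughly the
    geometric mean of total and active parameter counts (in billions)."""
    return math.sqrt(total_b * active_b)

# 35B total x 3B active -> ~10.2B, in the ballpark of the
# "11B dense" figure from the comment.
print(round(dense_equivalent_b(35, 3), 1))  # 10.2
```

The trade-off being described: MoE buys faster inference (only the active experts run per token) at the cost of needing memory for all the experts while delivering less "smarts" than a dense model of the same total size.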

BoredomIsFun|13 hours ago

> There are 1T+ open source models (e.g. GLM5),

GLM-5 is ~750B model.

ses1984|1 day ago

Who would have thought AI labs with billions upon billions in R&D budget would have better models than a free alternative.