top | item 47201496

lm28469 | 1 day ago

> Wonder what am I doing wrong?

You're comparing 100B-parameter open models running on a consumer laptop vs. private models with, at the very least, 1T parameters running on racks of bleeding-edge professional GPUs.

Local agentic coding is closer to "spit out the boilerplate for an Android app" than "deep research questions", especially on your machine.

vlovich123|1 day ago

The hardware difference explains runtime performance differences, not task performance.

Speculation is that the frontier models are all below 200B parameters, but a 2x size difference wouldn't fully explain the task-performance differences.

nl|22 hours ago

> Speculation is that the frontier models are all below 200B parameters

Some versions of some of the models are around that size, which you might hit, for example, with the ChatGPT auto-router.

But the frontier models are all over 1T parameters. Source: interviews with people who have left one of the big three labs and now work at the Chinese labs, where they talk about how to train 1T+ models.

BoredomIsFun|15 hours ago

Certainly not Opus. That beast feels very heavy; the coherence of longer-form prose is usually a good marker, and it can produce coherent 4,000-word short stories in a single shot.

NamlchakKhandro|23 hours ago

> The hardware difference explains runtime performance differences, not task performance.

Yes it does.

827a|22 hours ago

He's running a 35B-parameter model. Frontier models are well over a trillion parameters at this point. Parameters = smarts. There are 1T+ open-source models (e.g. GLM5), and they're actually getting to the point of being comparable with the closed-source models; but you cannot remotely run them on any hardware available to us.

Core speed/count and memory bandwidth determine your performance. Memory size determines your model size, which determines your smarts. Broadly speaking.
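The point above can be put in back-of-the-envelope numbers (a sketch of my own; the function names and the 100 GB/s laptop figure are illustrative assumptions, not from the thread): weight footprint is roughly parameter count times bytes per weight, and decode speed is ceilinged by how fast those weights stream from memory.

```python
def weight_footprint_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate RAM/VRAM for the weights alone (KV cache etc. is extra)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

def max_tokens_per_sec(bandwidth_gb_s: float, footprint_gb: float) -> float:
    """Rough ceiling: each generated token reads every weight once."""
    return bandwidth_gb_s / footprint_gb

# A 35B model at 4-bit quantization:
fp = weight_footprint_gb(35, 4)
print(fp)  # 17.5 GB of weights

# On a laptop with ~100 GB/s memory bandwidth, decode tops out around:
print(round(max_tokens_per_sec(100, fp), 1))  # ~5.7 tok/s
```

The same arithmetic shows why a dense 1T-parameter model is out of reach locally: even at 4 bits that's ~500 GB of weights before any KV cache.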

ses1984|1 day ago

Who would have thought AI labs with billions upon billions in R&D budget would have better models than a free alternative?

shlomo_z|19 hours ago

I'll add: AI labs put a lot of resources into letting the model search the web. That makes a big difference.

mstaoru|16 hours ago

I use search as well, via Open WebUI + SearXNG.

delaminator|1 day ago

Looks at the headline: "Qwen3.5 122B and 35B models offer Sonnet 4.5 performance on local computers"

lm28469|1 day ago

Yes, and Devstral 2 24B Q4 is supposed to be 90% as good, but it can't even reliably write to a file on my machine.

There are the benchmarks, the promises, and what everybody can actually try at home.