item 47189117

reactordev | 1 day ago

I run local models on Mac Studios and they are more than capable. Don’t spread FUD.


xpe | 47 minutes ago

My take on the parent (^) and grandparent (^^):

>> Local AI right now is a toy in comparison.

Charitable interpretation: Local AI (unclear; maybe gpt-oss-120b) isn't nearly as good as SoTA (unstated; perhaps Claude Opus 4.6). Unstated use case(s).

> I run local models on Mac Studios and they are more than capable. Don't spread FUD.

Charitable interpretation: On their Mac Studio (could be a cluster or a single machine; unclear), local models (unclear; maybe gpt-oss-120b, maybe not) are capable for their needs. Unstated use case(s).

The "Don't spread FUD." advocates for accurate information, which is a useful goal in general. However, it was uncharitable and brusque. An alternative approach would have been to ask a clarifying question.

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. - HN Guidelines

I promise I wrote this by hand. If you confidently thought otherwise, then I would kindly ask you to read my about page.

bottlepalm | 1 day ago

You're spreading FUD. There's nothing you can run locally that's on par with the speed/intelligence of a SOTA model.

3836293648 | 1 day ago

You may be correct about the level of models you can actually run on consumer hardware, but it's not FUD, and you're being needlessly aggressive here.

CamperBob2 | 1 day ago

Incorrect as of a couple of days ago, when Qwen 3.5 came out. It's a GPT-5-class model that you can run at full strength on a small DGX Spark or Mac cluster, and it still works pretty well after quantization.