bottlepalm|2 days ago

What use are weights without the hardware to run them? That's the gate. Local AI right now is a toy in comparison.

Nukes are actually a great example of something also gated by resources. Just having the knowledge/plans isn't good enough.

txrx0000|2 days ago

Scaling has hit a wall and will not get us to AGI. Open-source models are only a couple of months behind closed models, and a given level of capability will fit into smaller and smaller models over time. This is where open research can help: make the models smaller ASAP. I think it's likely we'll be able to get something human-level running on a single 16GB GPU before the end of the decade.
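
Quick napkin math on what a 16GB card can even hold. A sketch, assuming the standard bytes-per-parameter for common quantization formats and a guessed ~10% overhead for KV cache and activations:

```python
# What parameter counts fit on a single 16 GB GPU?
# Bytes-per-parameter are the standard figures for each format;
# the 10% overhead for KV cache/activations is an assumption.

GPU_BYTES = 16 * 1024**3   # 16 GiB of VRAM
OVERHEAD = 0.10            # assumed headroom, not a measurement

bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

budget = GPU_BYTES * (1 - OVERHEAD)
for fmt, bpp in bytes_per_param.items():
    print(f"{fmt}: ~{budget / bpp / 1e9:.0f}B params")
# fp16: ~8B, int8: ~15B, int4: ~31B
```

So the claim amounts to betting that a ~30B-class model at 4-bit can reach human-level capability.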

Tade0|1 day ago

> Scaling has hit a wall and will not get us to AGI.

That was never the aim. LLMs are not designed to be generally intelligent, just to be really good at producing believable text.

tbrownaw|2 days ago

> human-level to run on a single 16GB GPU before the end of the decade.

That's apparently about 6k books' worth of data.
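
Back-of-envelope on that figure; the average book size is an assumption, and the result swings heavily with it:

```python
# Reconstructing the "~6k books" estimate. The average book size
# below is an assumption; the result is very sensitive to it.

GPU_BYTES = 16e9                  # treat the full 16 GB as weights

avg_book_bytes = 2.7e6            # assumed: a long book as plain text
print(f"~{GPU_BYTES / avg_book_bytes:,.0f} books")   # ~5,926

# A typical ~500 KB novel instead gives ~32,000 books; either way,
# the figure depends almost entirely on the assumed book size.
```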

drdaeman|2 days ago

> Open-source models are only a couple of months behind closed models

Oh, come on, surely not just a couple months.

Benchmarks may boast some fancy numbers, but I just tried to save some money by trying out Qwen3-Next 80B and Qwen3.5 35B-A3B (since I recently got a machine that can run them at a tolerable speed) to generate documentation from a messy legacy codebase. They were nowhere close, in either output quality or performance, to any of the current models the SaaS LLM behemoths offer. Just an anecdote, of course, but it's all I have.

fooker|2 days ago

> hardware to run them

A few hundred thousand dollars per server: a huge expense if you want it at home, but a rounding error for most organizations.

bottlepalm|2 days ago

You're buying what, exactly, for a few hundred thousand? Running what model on it? Supporting how many users? At what tokens/sec?
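
Rough answers to that last question fall out of bandwidth arithmetic, since single-stream decode is usually memory-bandwidth-bound. A sketch with nominal spec-sheet numbers; the server configuration is hypothetical:

```python
# Single-stream decode upper bound: each generated token reads the full
# weight set once, so tokens/sec <= memory bandwidth / weight bytes.
# Spec-sheet numbers only; real serving adds KV-cache traffic, kernel
# overheads, and interconnect costs.

def decode_tps(bandwidth_gbs: float, weight_gb: float) -> float:
    """Upper-bound tokens/sec for one stream, bandwidth-bound decode."""
    return bandwidth_gbs / weight_gb

# Hypothetical 8x H100 box (~3,350 GB/s HBM3 each), a 70B model at
# int8 (~70 GB of weights), tensor-parallel across all eight GPUs:
print(f"~{decode_tps(8 * 3350, 70):.0f} tok/s per stream, upper bound")
# ~383 tok/s; batching reuses the same weight reads, so aggregate
# throughput for many users scales well past a single stream.
```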

reactordev|2 days ago

I run local models on Mac Studios and they are more than capable. Don't spread FUD.

xpe|8 hours ago

My take on the parent (^) and grandparent (^^):

>> Local AI right now is a toy in comparison.

Charitable interpretation: Local AI (unclear; maybe gpt-oss-120b) isn't nearly as good as SoTA (unstated; perhaps Claude Opus 4.6). Unstated use case(s).

> I run local models on Mac Studios and they are more than capable. Don't spread FUD.

Charitable interpretation: On their Mac Studio (could be a cluster or a single machine; unclear), local models (unclear; maybe gpt-oss-120b, maybe not) are capable for their needs. Unstated use case(s).

The "Don't spread FUD" advocates for accurate information, which is a useful goal in general. However, it was uncharitable and brusque; an alternative approach would have been to ask a clarifying question.

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. - HN Guidelines

I promise I wrote this by hand. If you confidently thought otherwise, I'd kindly ask you to read my about page.

bottlepalm|2 days ago

You're spreading FUD. There's nothing you can run locally that's on par with the speed/intelligence of a SOTA model.