txrx0000|1 day ago

Scaling has hit a wall and will not get us to AGI. Open-source models are only a couple of months behind closed models, and the same level of capability will fit into smaller and smaller models as time goes on. This is where open research can help: make the models smaller ASAP. I think it's likely that we'll be able to get something human-level to run on a single 16GB GPU before the end of the decade.
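
Back-of-envelope, to make the 16GB figure concrete (assuming weights dominate memory and ignoring KV cache/activations; the precision levels below are just illustrative):

  # How many parameters fit in 16 GB of VRAM, weights only.
  # Decimal GB for simplicity; figures are rough.
  VRAM_BYTES = 16e9

  for precision, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
      params_billion = VRAM_BYTES / bytes_per_param / 1e9
      print(f"{precision}: ~{params_billion:.0f}B parameters")

  # fp16: ~8B, int8: ~16B, 4-bit: ~32B -- so "human-level on 16GB"
  # amounts to betting that a model in the tens of billions of
  # parameters or smaller can get there.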

Tade0|1 day ago

> Scaling has hit a wall and will not get us to AGI.

That was never the aim. LLMs are not designed to be generally intelligent, just to be really good at producing believable text.

tbrownaw|1 day ago

> human-level to run on a single 16GB GPU before the end of the decade.

That's apparently about 6k books' worth of data.

txrx0000|1 day ago

For the weights and temporary state, yes. It doesn't sound like a lot until you remember that your DNA is about 600 books worth of data by the same metric.
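
Rough math, if you count the genome at 2 bits per base pair and assume ~1.2 MB of plain text per book (both are assumptions on my part, but they land in the right ballpark):

  # Human genome expressed in "books" of plain text, back-of-envelope.
  GENOME_BASE_PAIRS = 3.1e9                   # approximate haploid genome size
  GENOME_BYTES = GENOME_BASE_PAIRS * 2 / 8    # 2 bits per base pair
  BOOK_BYTES = 1.2e6                          # assumed average plain-text book

  print(f"genome ~= {GENOME_BYTES / 1e9:.2f} GB")           # ~0.78 GB
  print(f"genome ~= {GENOME_BYTES / BOOK_BYTES:.0f} books")  # ~650 books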

octoberfranklin|1 day ago

How many humans do you know who can recite 6000 books, word for word, exactly?

drdaeman|1 day ago

> Open-source models are only a couple of months behind closed models

Oh, come on, surely not just a couple months.

Benchmarks may boast some fancy numbers, but I just tried to save some money by trying out Qwen3-Next 80B and Qwen3.5 35B-A3B (since I recently got a machine that can run them at a tolerable speed) to generate some documentation from a messy legacy codebase. They came nowhere close, in either output quality or performance, to any of the current models the SaaS LLM behemoth corps offer. Just an anecdote, of course, but that's all I have.