
cg5280 | 6 months ago

Hopefully we see enough efficiency gains over time that this is true. The models I can run on my (expensive) local hardware are pretty terrible compared to the free models provided by Big LLM. I would hate to be chained to hardware I can't afford forever.


aDyslecticCrow | 6 months ago

The breakthrough of diffusion for token generation brought compute down a lot. But there are no local open-source versions yet.

Distillation for specialisation can also raise the capability of local models if we need them for specific things.

So it's chugging along nicely.
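For readers unfamiliar with the distillation idea mentioned above: a student model is trained to match a larger teacher's temperature-softened output distribution, typically via a KL-divergence loss. A minimal sketch of that loss in plain Python (assuming raw logits as input; the temperature `T` and the `T*T` scaling follow the common Hinton-style formulation):

```python
import math

def softmax(logits, T=1.0):
    # Temperature-softened softmax; higher T flattens the distribution.
    zs = [x / T for x in logits]
    m = max(zs)  # subtract max for numerical stability
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def distill_kl(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on softened distributions.
    # The T*T factor keeps the gradient scale comparable to a
    # hard-label cross-entropy term when the two are mixed.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T * T) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In practice the student minimizes a weighted sum of this term and the ordinary next-token cross-entropy, which is how a small local model can inherit task-specific capability from a much larger one.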