cg5280 | 6 months ago
Hopefully we see enough efficiency gains over time that this is true. The models I can run on my (expensive) local hardware are pretty terrible compared to the free models provided by Big LLM. I would hate to be chained to hardware I can't afford forever.
aDyslecticCrow|6 months ago
Distillation for specialisation can also raise the capability of local models where we need it for specific tasks.
So it's chugging along nicely.
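For anyone curious what distillation means concretely: the usual setup trains a small "student" model to match the temperature-softened output distribution of a large "teacher". A minimal sketch of the core loss (names and temperature value are illustrative, not from any particular library):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) between temperature-softened distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2
```

The loss is zero when the student exactly matches the teacher's logits, and grows as the distributions diverge; in practice it is mixed with the normal cross-entropy loss on hard labels during the student's training run.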