zargon|9 days ago
A large model (100B+, the more the better) may be acceptable at 2-bit quantization, depending on the task. But not a small model, and especially not for technical tasks. On top of that, one still needs room for the OS, other software, and the KV cache. 8GB is just not very useful for local LLMs. That said, it can still be entertaining to try out a 4-bit 8B model for the fun of it.
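Some back-of-the-envelope arithmetic on weight footprints, just to make the numbers concrete. The parameter counts and bit-widths below are illustrative, and real quantized files (GGUF, AWQ, etc.) carry extra overhead for scales, zero-points, and embeddings that are often kept at higher precision:

```python
# Rough weight footprint: params * bits-per-weight / 8, ignoring overhead.
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight size in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(8, 4), (8, 2), (100, 2)]:
    print(f"{params:>4}B @ {bits}-bit ~ {weight_gb(params, bits):.1f} GB")
# An 8B model at 4-bit is ~4 GB of weights alone, so on an 8GB machine
# half the memory is gone before the OS, runtime, and KV cache get any.
# A 100B model at 2-bit is ~25 GB and doesn't come close to fitting.
```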
zozbot234|8 days ago
zargon|8 days ago
Once you're swapping to disk, performance becomes unusable for most people. And for local inference, the KV cache is the worst possible thing to put on disk, since it is read and written on every generated token.
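For a sense of how big the KV cache gets, here is a sketch of the standard size formula (2 tensors per layer, K and V, one entry per token per KV head). The config values are hypothetical, roughly an 8B-class model with grouped-query attention:

```python
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                seq_len: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV cache size in GB: K and V per layer, per token."""
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem / 1e9

# Hypothetical config: 32 layers, 8 KV heads (GQA), head_dim 128, fp16.
print(f"{kv_cache_gb(32, 8, 128, 8192):.2f} GB at 8k context")
# Around 1 GB at 8k context for this config -- and every token of decoding
# touches the whole cache, which is why paging it to disk is so painful.
```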