item 43857088

endlessvoid94 | 10 months ago

I've found local models useful for non-coding tasks, but the 8B-parameter models have so far proven lacking enough for coding that I'm waiting another few months for whatever the Moore's-law equivalent of LLM progress is to catch up. Until then, I'm sticking with Sonnet 3.7.


walthamstow | 10 months ago

If you have a 32GB Mac, you should be able to run models up to about 27B params. I have done so with Google's `gemma3:27b-it-qat`.
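A rough back-of-the-envelope check of why 27B fits in 32GB (a sketch; it assumes the QAT variant is quantized to roughly 4 bits per weight, and ignores KV cache and runtime overhead, which add a few more GB):

```python
# Approximate weight memory for a quantized LLM.
# Assumption: ~4 bits/weight for a 4-bit QAT quantization; KV cache
# and runtime overhead (a few GB) are not included.

def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB (1 GB = 2**30 bytes)."""
    return n_params * bits_per_weight / 8 / 2**30

print(f"27B @ 4-bit: ~{weight_memory_gb(27e9, 4):.1f} GB")  # ~12.6 GB
print(f" 8B @ 4-bit: ~{weight_memory_gb(8e9, 4):.1f} GB")   # ~3.7 GB
```

~12.6 GB of weights plus cache leaves headroom on a 32GB machine, but is tight on 24GB once you account for the OS and the fraction of unified memory macOS allows the GPU to use.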

endlessvoid94 | 10 months ago

Hm, I've got an M2 Air w/ 24GB. Running the 27B model was crawling. Maybe I had something misconfigured.

alkh | 10 months ago

How much RAM was it taking during inference?