top | item 46529654

cwoolfe | 1 month ago

How can I use ByteShape to run LLMs faster on my 32GB MacBook M1 Max? Or has Ollama already optimized that?

nunodonato | 1 month ago

Don't use Ollama; use llama.cpp directly. Ollama bundles an outdated version of llama.cpp, so you miss its recent optimizations.
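
As a rough sketch of what "use llama.cpp directly" means on an Apple Silicon Mac (the model path is a placeholder, not a real file; flags shown are standard llama.cpp options but check your build's `--help`):

```shell
# Build llama.cpp from source; Metal GPU support is enabled by default on macOS
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Run inference on a quantized GGUF model you have downloaded.
# -ngl 99 offloads all layers to the GPU; -n limits generated tokens.
./build/bin/llama-cli -m /path/to/model.gguf -ngl 99 -p "Hello" -n 64
```

Because you build from the current source, you pick up llama.cpp changes as soon as they land, rather than waiting for a downstream release to rebundle them.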