item 46529654 | cwoolfe | 1 month ago
How can I use ByteShape to run LLMs faster on my 32GB MacBook M1 Max? Or has Ollama already optimized that?
nunodonato | 1 month ago
Don't use Ollama; use llama.cpp directly, since Ollama ships with an outdated llama.cpp.
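A minimal sketch of that suggestion: building llama.cpp from source and running a local GGUF model on Apple Silicon. The model path, context size, and prompt below are illustrative assumptions, not from the thread; llama.cpp enables Metal GPU support by default on macOS.

```shell
# Build llama.cpp from source (Metal backend is on by default on macOS)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j

# Run a GGUF model (path is an example):
#   -ngl 99  offload all layers to the GPU
#   -c 4096  context window size
./build/bin/llama-cli -m "$HOME/models/model.gguf" -ngl 99 -c 4096 -p "Hello"
```

On a 32GB M1 Max, models whose quantized weights fit well under unified memory (roughly up to the mid-20GB range) can be fully GPU-offloaded this way.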