(no title)
lwhi | 2 days ago
On a 64 GB Apple silicon Mac mini you can natively host mid-sized, and some larger quantised, local models using Ollama.
For example:
- Qwen3-Coder (32B)
- GLM-4.6 (or other GLM-4 variants)
- Devstral-24B / Mistral Large (quantised)
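A rough way to sanity-check whether a model fits in 64 GB is weight-bytes arithmetic: parameters × bytes per weight, plus some headroom for the KV cache and runtime. The numbers below are illustrative estimates, not benchmarks:

```shell
# Back-of-envelope memory check for a quantised model:
#   memory ≈ parameters × bytes-per-weight + KV-cache/runtime overhead
# A ~4-bit (Q4) quantisation stores roughly 0.5 bytes per weight.
awk 'BEGIN {
  params_b = 32   # model size in billions of parameters (e.g. a 32B model)
  bpw      = 0.5  # bytes per weight at ~4-bit quantisation
  overhead = 6    # rough GB allowance for KV cache and runtime
  printf "%.0f GB\n", params_b * bpw + overhead
}'
# prints "22 GB" -- comfortably inside 64 GB of unified memory
```

By the same arithmetic a larger model like Mistral Large (~123B) lands around 65+ GB at Q4, which is why it only fits at more aggressive quantisation levels.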