lwhi | 5 days ago
I disagree.
rushcar | 4 days ago
What model are you running with 64GB of VRAM (equivalent)? I doubt most users are doing that. Looking at their documentation, the default path for openclaw seems to be a 3P API for the model.
lwhi | 3 days ago
It doesn't matter what 'most users' are doing. On a 64 GB Apple silicon Mac mini you can natively host mid-sized and some larger quantised local models using Ollama.
For example:
Qwen3-Coder (32B), GLM-4.7 (or GLM-4 variants), Devstral-24B / Mistral Large (quantized)
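If anyone wants to sanity-check this themselves, here's a minimal sketch using the official `ollama` Python client (pip install ollama). The model tag is illustrative, not a recommendation; pull whatever fits your RAM first with `ollama pull`:

    # Minimal sketch: query a locally hosted model via the ollama
    # Python client. Assumes the Ollama server is running (`ollama serve`)
    # and the model has already been pulled. The tag below is illustrative.
    import ollama

    response = ollama.chat(
        model="qwen3-coder:30b",  # substitute the model you actually pulled
        messages=[{"role": "user", "content": "Write a binary search in Python."}],
    )
    print(response.message.content)

Everything stays on-box: Ollama serves the same API on localhost:11434 regardless of which model is loaded, so agent tooling that expects an OpenAI-style endpoint can be pointed at it too.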