rushcar | 4 days ago

What model are you running with 64 GB of VRAM (equivalent)? I doubt most users are doing that. Looking at their documentation, the default path for openclaw seems to be a third-party API for the model.

lwhi | 2 days ago

It doesn't matter what 'most users' are doing.

On a 64 GB Apple silicon Mac mini you can natively host mid-sized (and some larger quantised) local models using Ollama.

For example:

Qwen3-Coder (32B), GLM-4.7 (or other GLM-4 variants), Devstral-24B / Mistral Large (quantised)
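
For anyone who hasn't tried this: a minimal sketch of querying one of those locally hosted models through Ollama's REST API, which listens on localhost:11434 by default. The qwen3-coder tag is illustrative and assumes you've already pulled a model with `ollama pull`:

    import json
    import urllib.request

    # Ollama's default local endpoint for single-shot generation.
    OLLAMA_URL = "http://localhost:11434/api/generate"

    payload = {
        "model": "qwen3-coder",  # illustrative tag; use whatever model you pulled
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,         # return one JSON object instead of a token stream
    }

    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

    # With stream=False the reply is a single JSON object whose
    # "response" field holds the generated text.
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
        print(body["response"])

No API key, no third-party endpoint; everything stays on the machine.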