item 43920866 | reichardt | 9 months ago

With around 4.6 GiB model size, the new Qwen3-8B quantized to 4-bit should fit comfortably in 16 GiB of memory: https://huggingface.co/mlx-community/Qwen3-8B-4bit
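The size figure follows from simple arithmetic: each 4-bit weight takes half a byte, plus a small per-group overhead for quantization scales. A rough sketch of that estimate, assuming an ~8.2B parameter count and a group size of 64 with fp16 scale and bias per group (typical defaults for MLX-style quantization, not confirmed for this exact checkpoint):

```python
def quantized_size_gib(n_params, bits=4, group_size=64, scale_bytes=2):
    """Back-of-the-envelope size of a group-quantized model in GiB."""
    # Each weight stores `bits` bits.
    weight_bytes = n_params * bits / 8
    # Each group of weights carries an fp16 scale and an fp16 bias.
    overhead_bytes = (n_params / group_size) * scale_bytes * 2
    return (weight_bytes + overhead_bytes) / 2**30

# Assumed parameter count for Qwen3-8B (illustrative, not an official figure)
print(f"{quantized_size_gib(8.2e9):.1f} GiB")
```

This lands in the same ballpark as the quoted 4.6 GiB; the exact on-disk size also depends on which layers (e.g. embeddings) stay in higher precision.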
No comments yet.