WingNews


reichardt | 9 months ago

At around 4.6 GiB after 4-bit quantization, the new Qwen3-8B should fit comfortably in 16 GiB of memory: https://huggingface.co/mlx-community/Qwen3-8B-4bit
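The ~4.6 GiB figure follows from back-of-the-envelope arithmetic: 8 billion parameters at 4 bits each is about 4 GB of raw weights, plus per-group quantization metadata (scales and zero points). A rough sketch of that estimate, with a hypothetical group size and fp16 metadata as assumptions (the actual MLX quantization settings may differ):

```python
# Rough memory estimate for a 4-bit quantized 8B-parameter model.
# Assumed values (not taken from the model card): group size 32,
# one fp16 scale and one fp16 zero point per group.
params = 8_000_000_000
bits_per_weight = 4
weight_bytes = params * bits_per_weight / 8        # packed 4-bit weights

group_size = 32                                    # weights sharing one scale/zero
metadata_bytes = (params / group_size) * 2 * 2     # scale + zero, 2 bytes each

total_gib = (weight_bytes + metadata_bytes) / 2**30
print(f"~{total_gib:.1f} GiB")
```

With these assumptions the estimate lands near the quoted figure, leaving roughly 11 GiB of the 16 GiB free for the KV cache, activations, and the rest of the system.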

No comments yet.

powered by hn/api // news.ycombinator.com