RandomBK | 1 year ago
The only 32B distill I'm aware of is `DeepSeek-R1-Distill-Qwen-32B`, which is the `Qwen-32B` base model distilled (further trained) on outputs from the full R1 model.
rahimnathwani | 1 year ago
That model's weights are around 64GB: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-...
GP is likely running the 4-bit quantized version of the finetuned Qwen model.
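A quick back-of-envelope sketch of the sizes discussed above (the ~32.8B parameter count is an assumption based on the Qwen-32B family; exact on-disk sizes vary with format and metadata):

```python
# Rough weight-size estimate for a ~32B-parameter model.
# Assumption: ~32.8 billion parameters (approximate Qwen-32B size).
params = 32.8e9

fp16_gb = params * 2 / 1e9    # 16-bit weights: 2 bytes per parameter
q4_gb = params * 0.5 / 1e9    # 4-bit quantization: 0.5 bytes per parameter

print(f"fp16:  ~{fp16_gb:.0f} GB")   # ~66 GB, in line with the ~64GB figure above
print(f"4-bit: ~{q4_gb:.0f} GB")    # roughly a quarter of that
```

This is why the 4-bit quantized version is the practical choice for local use: the full-precision weights won't fit in the memory of a single consumer GPU, while the 4-bit version is about a quarter of the size.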