top | item 42896812

RandomBK | 1 year ago

Reminder: DeepSeek distilled models are better thought of as fine-tunes of Qwen/Llama using DeepSeek output, and are not the same as actual DeepSeek v3 or R1.

This unfortunate naming has sown plenty of confusion around DeepSeek's quality and resource requirements. Actual DeepSeek v3/R1 continues to require at least ~100GB of VRAM/Mem/SSD, and this does not change that.

bestouff | 1 year ago

Out of curiosity, would an A100 80GB work for this?

bestouff | 1 year ago

Replying to myself: apparently it's not ~100GB of VRAM but more like 700GB that's needed to run DeepSeek R1. The gear needed to run that would cost something in the vicinity of €100K!
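The ~700GB figure is roughly consistent with back-of-envelope math on the full model. A quick sketch (weights only; the KV cache and activations add more on top, which is where the gap between 671GB and ~700GB comes from):

```python
# Back-of-envelope weight memory for DeepSeek R1 (671B parameters)
# at common precisions. Weights only; runtime overhead not included.
params = 671e9

for name, bytes_per_param in [("FP16", 2), ("FP8", 1), ("INT4", 0.5)]:
    gb = params * bytes_per_param / 1e9
    print(f"{name}: ~{gb:.0f} GB")
```

At the model's native FP8 that's ~671GB of weights alone, so ~700GB of total memory for serving is in the right ballpark.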

darthrupert | 1 year ago

Wait, what am I running on my 32GB MacBook then? I thought it was the 32b version of deepseek-r1.

RandomBK | 1 year ago

The only 32B distill I'm aware of is `DeepSeek-R1-Distill-Qwen-32B`, which is the `Qwen-32B` base model distilled (i.e. further trained) on outputs from the full R1 model.
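In data terms, "distilled on outputs" just means ordinary supervised fine-tuning on teacher completions. A minimal sketch of the data-prep step, where `teacher_generate` is a hypothetical stand-in for querying the full R1 model:

```python
# Sketch of distillation-as-fine-tuning: the student (e.g. Qwen-32B) is
# trained on (prompt, completion) pairs produced by the teacher (full R1).
# `teacher_generate` is a hypothetical placeholder, not a real API.

def teacher_generate(prompt: str) -> str:
    # Placeholder: in practice this would call the full R1 model.
    return f"<think>reasoning about {prompt!r}...</think> answer"

def build_distill_dataset(prompts):
    # Each example pairs a prompt with the teacher's output; the student
    # is then fine-tuned with standard supervised learning on these targets.
    return [{"prompt": p, "completion": teacher_generate(p)} for p in prompts]

dataset = build_distill_dataset(["What is 2+2?", "Prove sqrt(2) is irrational."])
print(len(dataset))  # 2
```

The student never shares weights or architecture with R1 itself, which is why the distills behave and size like their base models.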

rahimnathwani | 1 year ago

Deepseek R1 has 671 billion parameters. Even if you could quantize each parameter to just 1 bit (from 8 bits), you'd still need 84GB of RAM just for the weights. There is no 32B parameter version of the V3/R1 model architecture.
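The 84GB floor follows directly from the parameter count:

```python
# Lower bound on weight memory for a 671B-parameter model at 1 bit/param.
params = 671e9
bits_per_param = 1
gb = params * bits_per_param / 8 / 1e9
print(f"~{gb:.0f} GB for weights alone")  # ~84 GB
```

So no quantization scheme can squeeze the full R1 into a 32GB machine.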

Plankaluel | 1 year ago

You are running Qwen2.5-32B, which has been fine-tuned on data that was generated by R1.