
DeepSeek R1 671B running on 2 M2 Ultras faster than reading speed

96 points | thyrox | 1 year ago | twitter.com

29 comments


mythz|1 year ago

Someone also got the full Q8 R1 running at 6-8 tok/s on a $6K PC without a GPU: 2x EPYC with 768GB of DDR5 RAM [1].

It will be interesting to compare the value/performance of the next-gen M4 Ultras (or Extreme?) against NVIDIA's new DIGITS [2] when they're released.

[1] https://x.com/carrigmat/status/1884244369907278106

[2] https://www.nvidia.com/en-us/project-digits/
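
Rough sanity check on the 6-8 tok/s number: CPU-only decode is memory-bandwidth-bound, and R1 is an MoE with roughly 37B parameters activated per token. Assuming something like 12 channels of DDR5-4800 per socket (the exact memory config in that build is an assumption here), a back-of-envelope sketch:

  # Decode speed ceiling: each token must stream the activated weights from RAM.
  active_params = 37e9                 # R1 is MoE: ~37B params activated per token
  bytes_per_token = active_params * 1  # 1 byte/weight at Q8 -> ~37 GB read per token

  per_socket_bw = 12 * 38.4            # 12 channels x DDR5-4800 ~= 461 GB/s (assumed config)
  total_bw = 2 * per_socket_bw         # ~922 GB/s if the work spreads over both sockets

  print(f"1 socket : ~{per_socket_bw / (bytes_per_token / 1e9):.0f} tok/s ceiling")  # ~12
  print(f"2 sockets: ~{total_bw / (bytes_per_token / 1e9):.0f} tok/s ceiling")       # ~25
  # Observed 6-8 tok/s sits comfortably under that ceiling once NUMA and expert-routing overheads bite.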

mrcwinn|1 year ago

DIGITS will be $3k and have 128GB of unified memory, so don't we already know that it wouldn't compare well to this rig? 128GB won't be enough to fit the model in memory.

As for Apple, we'll see.
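
Back-of-envelope on the memory math (a sketch assuming 671B parameters, weights only, ignoring KV cache and activation overhead):

  # Weight footprint of a 671B-parameter model at different quantization levels.
  PARAMS = 671e9

  def footprint_gb(bits_per_weight):
      return PARAMS * bits_per_weight / 8 / 1e9

  print(f"Q8 (8-bit): {footprint_gb(8):.0f} GB")  # ~671 GB -> fits the 768GB EPYC box
  print(f"~4-bit:     {footprint_gb(4):.0f} GB")  # ~336 GB -> fits across 2x 192GB M2 Ultras
  # A single 128GB DIGITS can't hold the full model at either precision.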

rahimnathwani|1 year ago

Wow!

6 to 8 tokens per second.

And less than a tenth of the cost of a GPU setup.

phonon|1 year ago

Nice! Xeon 6 using AMX-BF16/INT8 instructions should be something like 5 times faster than that...

danans|1 year ago

Check out the power draw metrics. Following the CPU+GPU power consumption, it seems like it averaged 22W for about a minute. Unless I'm missing something, the inference for this example consumed at most .0004 kWh.

That's almost nothing. If these models are capable/functional enough for most day-to-day uses, then useful LLM-based GenAI is already at the "too cheap to meter" stage.

danans|1 year ago

So it seems like this was actually 7 M2 Ultras, not 2, so .0028 kWh?
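
Either way the energy is tiny; a quick check of the numbers, assuming the 22W figure is per machine over roughly a minute:

  # Energy for the quoted run: ~22 W average CPU+GPU draw per machine, ~60 s of generation.
  watts_per_machine = 22
  seconds = 60
  kwh_per_machine = watts_per_machine * seconds / 3.6e6   # ~0.00037 kWh (the ".0004" above)

  for machines in (2, 7):   # title says 2 machines, this comment suggests 7
      print(f"{machines} machines: {machines * kwh_per_machine:.4f} kWh")
  # 2 machines: 0.0007 kWh
  # 7 machines: 0.0026 kWh  (the .0028 figure is 7 x 0.0004, same ballpark)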

teruakohatu|1 year ago

I am amazed mlx-lm/mlx.distributed works that well on prosumer hardware.

I don't think they specified what they were using for networking, but it was probably Thunderbolt/USB4, which can reach 40Gbps.
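
If the split is pipeline-style, only an activation vector per token crosses the link, so the 40Gbps shouldn't matter much. A rough sketch, assuming a hidden size of 7168 (DeepSeek-V3's reported width) and 16-bit activations:

  # Data crossing the link per generated token with a pipeline-style split:
  # one activation vector per hop between the machines.
  hidden_size = 7168                  # assumed d_model (DeepSeek-V3's reported width)
  payload = hidden_size * 2           # bf16 activations -> ~14 KiB per token per hop

  link_bytes_per_s = 40 / 8 * 1e9     # 40Gbps Thunderbolt/USB4 -> 5 GB/s
  transfer_us = payload / link_bytes_per_s * 1e6

  print(f"~{payload / 1024:.0f} KiB per token, ~{transfer_us:.0f} us on the wire")
  # Bandwidth is nowhere near the bottleneck; per-hop latency matters more than the 40Gbps.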

shihab|1 year ago

Please note that it’s using pretty aggressive quantization (around 4 bits per weight).

doctoboggan|1 year ago

It's not that aggressive a quantization, considering that the full model was trained at only 8 bits.

rashidae|1 year ago

This is amazing!! What kind of applications are you considering for this? Apart from saving variable costs, extensive fine-tuning, and security… I’m curious to evaluate this from a financial perspective, as variable costs can be daunting, but not too much “yet”.

I’m hoping NVIDIA comes up with their new consumer computer soon!

iFred|1 year ago

Complete aside, but I think this is the first time I’ve seen Apple’s internal DNS outside of Apple.

CharlesW|1 year ago

scv = Santa Clara Valley

creativenolo|1 year ago

How is this split between two computers?

DrNosferatu|1 year ago

Heavily quantized…

Still interesting though.

mrcwinn|1 year ago

Fascinating to read the thinking process of a flush vs a straight in poker. It's circular nonsense that is not at all grounded in reason — it's grounded in the factual memory of the rules of Poker, repeated over and over as it continues to doubt itself and double-check. What nonsense!

How many additional nuclear power plants will need to be built because even these incredible technical achievements are, under the hood, morons? XD

talldayo|1 year ago

[deleted]

epistasis|1 year ago

That's a hell of a lot cheaper than running the equivalent H100 setup at home...

And cheaper than a lot of hobbyists' bicycles!

mrbungie|1 year ago

It is a 671B-param model, but I guess you already knew that from the title.

But you're right, let's just keep waiting for the town-sized Data Centers + Power Plants kindly served by our big tech overlords.

PS: If you're referring to it being a Mac, obviously you can build a more cost-efficient but harder-to-cool rig.

sitkack|1 year ago

$14k in 2025 is about $6,400 in 1992 dollars.

cma|1 year ago

There are lots of people with ATVs and hobby motorbikes etc. that cost a good bit more.