
mhast | 5 months ago

It depends on the model.

It's typically OK for MoE models, but if you try to run something non-MoE the speed will plummet. In that same thread there are people getting 50 tok/s on MoE models and 5 on non-MoE ones. (https://www.reddit.com/r/LocalLLaMA/comments/1n79udw/comment...)

And while it has unified memory, the memory is quite slow: 250 GB/s, compared to 500+ GB/s for an M4 Max or 1800 GB/s for a 5090. So it's fast for a CPU, but pretty slow for a GPU.
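The reason bandwidth dominates here: during decode, every generated token has to stream all *active* weights from memory once, so tok/s is roughly bandwidth divided by active weight bytes. A back-of-envelope sketch (the active-parameter counts and 8-bit quantization below are illustrative assumptions, not measurements from the thread):

```python
def est_tok_per_s(bandwidth_gb_s: float, active_params_b: float,
                  bytes_per_param: float) -> float:
    """Rough upper bound on tokens/sec when decoding is memory-bound:
    bandwidth / (active parameters * bytes per parameter)."""
    active_gb = active_params_b * bytes_per_param
    return bandwidth_gb_s / active_gb

# 250 GB/s machine, 8-bit weights (assumed):
moe = est_tok_per_s(250, active_params_b=5, bytes_per_param=1)    # MoE, ~5B active
dense = est_tok_per_s(250, active_params_b=70, bytes_per_param=1) # 70B dense

print(f"MoE ~{moe:.0f} tok/s, dense ~{dense:.0f} tok/s")
# MoE ~50 tok/s, dense ~4 tok/s
```

Which is roughly the 50-vs-5 split people report: a MoE model only touches a small fraction of its weights per token, while a dense model has to read all of them.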

(That said, there are not a lot of cheap options for running large models locally. They all have significant compromises.)
