For some workloads, it’s almost all about the VRAM. In those cases I’ve been wondering whether a high-memory M1 or M2 Mac could make a good lab machine thanks to unified memory. It’ll run more quietly, use significantly less power, and won’t overload your electrical circuit. On a 128 GB Mac Studio you could theoretically run or even train models that would otherwise require multiple $6k A6000 GPUs in a custom build drawing oodles of power at the plug. It’d be slow, but slow beats not possible. And if you need a new development machine anyhow, you can justify some of that beefy Mac Studio’s cost as part of your required spend. PyTorch has supported “mps” as a target device for some time now.
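Using it looks just like targeting CUDA. A minimal sketch (assumes PyTorch 1.12 or later, where the MPS backend landed), with a CPU fallback so the same script runs anywhere:

```python
import torch

# Target Apple's Metal Performance Shaders backend when available,
# falling back to CPU otherwise (e.g. on Linux or older Macs).
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

# Tensors (and models, via .to(device)) land in unified memory just
# as they would in dedicated VRAM with a "cuda" device.
x = torch.randn(4, 4, device=device)
y = x @ x
print(y.shape, y.device)
```

The point being: existing training code mostly just works by swapping the device string, so the big-unified-memory experiment doesn’t require a rewrite.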