They show a test run of a 1B Llama 3.2 model. Doesn't that fit on a single Mac? Distributing the workload in this case must be slower than running it on a single machine.
That said, this is interesting, and I'm confused about why they aren't showcasing a test run of a larger model that actually necessitates distributing the workload across the cluster.
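For context, a rough back-of-envelope sketch of why a 1B model fits comfortably on one machine (assumed precisions and byte sizes; a real runtime also needs headroom for the KV cache and activations, which this ignores):

```python
# Approximate weight-only memory footprint of a 1B-parameter model
# at common precisions. Figures are illustrative, not measured.
params = 1_000_000_000
for name, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{name}: ~{gib:.2f} GiB")
```

Even at fp16 that is under 2 GiB of weights, well within the unified memory of any recent Mac, so inter-machine communication overhead would likely dominate any gain from sharding.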
morphle|1 year ago
[1] https://www.youtube.com/watch?v=GBR6pHZ68Ho