top | item 37560963


sumo43 | 2 years ago

For training you would need more memory. As for the pooling: theoretically yes, but wouldn't latency play as large a part in the response time here, if not a greater one? Imagine a tensor-parallel gather where the other nodes are in different parts of the country.

Here I'm assuming that Petals uses a large number of small, heterogeneous nodes like consumer GPUs. It might as well be something much simpler.
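The latency concern can be made concrete with a back-of-envelope sketch. If every transformer layer requires a gather across tensor-parallel peers, each token pays one network round trip per layer. All numbers below (layer count, RTTs, compute times) are illustrative assumptions, not measurements of Petals:

```python
# Back-of-envelope: per-token latency for tensor-parallel inference
# when each layer's gather must cross the network between nodes.
# Numbers are illustrative assumptions, not measurements.

def per_token_latency_ms(num_layers, rtt_ms, compute_ms_per_layer):
    """Each layer waits for one gather round trip on top of its
    local compute time; layers run sequentially."""
    return num_layers * (rtt_ms + compute_ms_per_layer)

# A 70B-class model with ~80 layers, 0.5 ms of compute per layer.
local = per_token_latency_ms(80, rtt_ms=0.01, compute_ms_per_layer=0.5)  # same rack
wan   = per_token_latency_ms(80, rtt_ms=40.0, compute_ms_per_layer=0.5)  # cross-country

print(f"same-rack:     {local:.0f} ms/token")  # ~41 ms/token
print(f"cross-country: {wan:.0f} ms/token")    # 3240 ms/token
```

Under these assumptions, WAN round trips dominate per-token latency by roughly two orders of magnitude, which is the point about nodes "in different parts of the country".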


brucethemoose2 | 2 years ago

> Theoretically yes but wouldn't latency play as much, if not a greater part in the response time here?

For inference? Yeah, but it's still better than nothing if your hardware can't run the full model, or can only run it extremely slowly.

I think frameworks like MLC-LLM and llama.cpp kinda throw a wrench in this though, as you can get very acceptable throughput on an IGP or split across a CPU/dGPU, without that huge networking penalty. And pooling complete hosts (like AI Horde) is much cheaper.
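The "pooling complete hosts is much cheaper" point can be sketched the same way: when each host runs the whole model (as AI Horde workers do), the WAN round trip is paid once per request rather than once per layer per token. The function names and all numbers here are illustrative assumptions:

```python
# Illustrative comparison: WAN tensor-parallel gathers vs. pooling
# complete hosts. All numbers are assumptions, not benchmarks.

def tp_gather_latency_ms(tokens, num_layers, rtt_ms, compute_ms_per_layer):
    # Tensor parallelism over WAN: every layer of every token waits on a gather.
    return tokens * num_layers * (rtt_ms + compute_ms_per_layer)

def full_host_latency_ms(tokens, rtt_ms, compute_ms_per_token):
    # Complete host: one round trip for the request; generation stays local.
    return rtt_ms + tokens * compute_ms_per_token

tokens = 100
print(tp_gather_latency_ms(tokens, num_layers=80, rtt_ms=40.0,
                           compute_ms_per_layer=0.5))   # 324000.0 ms (~5.4 min)
print(full_host_latency_ms(tokens, rtt_ms=40.0,
                           compute_ms_per_token=30.0))  # 3040.0 ms (~3 s)
```

Even with a much slower per-token compute rate on the single host, keeping generation local wins by a wide margin in this sketch.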

I'm not sure what the training requirements are, but ultimately throughput is all that matters for training, especially if you can "buy" training time with otherwise idle GPU time.