top | item 40359384


mendigou | 1 year ago

I haven't touched this in a while, but you can train NNs in a distributed fashion, and what GP described is roughly the most basic version of data parallelism: there is a copy of the model on each node, each node receives a different batch of data, and the gradients get synchronized (averaged) after each batch, so the copies again start from the same point like you mention.
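The scheme described above (identical model copies, per-node batches, gradients averaged after each batch) can be sketched in a few lines of pure Python. This is a toy one-weight model with made-up names, not any real framework's API; real systems do the averaging with an all-reduce over GPU interconnects:

```python
# Toy synchronous data parallelism: each "worker" holds an identical
# copy of a one-weight model, computes a gradient on its own batch,
# and all gradients are averaged before every worker steps identically.

def grad(w, batch):
    # d/dw of mean((w*x - y)^2) over the batch
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train_step(weights, batches, lr=0.1):
    # 1. each worker computes a local gradient on its own data shard
    local_grads = [grad(w, b) for w, b in zip(weights, batches)]
    # 2. "all-reduce": average the gradients across workers
    g = sum(local_grads) / len(local_grads)
    # 3. every replica applies the same update, so copies stay in sync
    return [w - lr * g for w in weights]

# two workers, same initial weight, data drawn from y = 3x
weights = [0.0, 0.0]
batches = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
for _ in range(200):
    weights = train_step(weights, batches)

print(weights)  # both replicas converge to ~3.0 and remain identical
```

Note step 2 is the synchronization point the thread is arguing about: it runs once per batch, and in a real model it moves the full gradient (same size as the model) between workers.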

Most modern large models cannot be trained on a single device (GPU, accelerator, whatever), so there's no alternative to distributed training. They often wouldn't even fit in the memory of one GPU/accelerator, so there are more complex schemes (model and pipeline parallelism) that split the model itself across devices.
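A back-of-envelope estimate of why one device isn't enough (illustrative numbers, not any specific model or GPU):

```python
# Rough memory estimate for a hypothetical 70B-parameter model.

params = 70e9           # 70B parameters (made-up round number)
bytes_per_param = 2     # fp16/bf16 weights

weights_gb = params * bytes_per_param / 1e9
# Adam-style training adds gradients plus two optimizer moments,
# often in fp32; ~16 bytes/param total is a common rule of thumb.
training_gb = params * 16 / 1e9

print(f"weights alone:  {weights_gb:.0f} GB")    # 140 GB
print(f"training state: {training_gb:.0f} GB")   # 1120 GB

gpu_memory_gb = 80      # e.g. a high-end 80 GB accelerator
print(training_gb / gpu_memory_gb)  # ~14 such GPUs just to hold state
```

So even before any compute happens, the state alone has to be sharded across many devices.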


mirekrusin|1 year ago

And their bottleneck is what? Data transfer. The state is gigantic and needs to be frequently synchronized. That's why it only works with sophisticated, ultra-high-bandwidth, specialized interconnects. They employ tricks here and there, but those don't scale that well; e.g. with an 8-expert MoE you get roughly a factor-of-8 saving in active compute, at the cost that only a fraction of the parameters are used per token. They of course do as much parallelism as they can at the model/data/pipeline levels, but it's a struggle even with the fastest interconnects on the planet. Those techniques don't transfer onto the networks normal people use, and using the word "distributed" to describe both settings conflates two regimes with dramatically different properties. It's a bit like saying you could make the L1 or L2 CPU cache bigger by connecting multiple CPUs with a network cable. It doesn't work like that.

You can't scale much by just averaging parallel runs; you need to churn through iterations fast.

You can't, e.g., start with random state, schedule parallel training runs, average it all out, and expect to end up with a well-trained network in one step.

Every step invalidates the input state for everything, and the state is gigantic.

So it's dominated by huge transfers at high frequency.

You can't, for example, connect 2 GPUs with a network cable and expect a speedup. You need to put them on the same motherboard, on the same high-bandwidth interconnect, to see any gains.
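A rough estimate of why (illustrative numbers; real all-reduce costs depend on topology, precision, and compute/communication overlap):

```python
# Per-step gradient synchronization traffic vs. link speed.
# A ring all-reduce moves roughly 2x the gradient size per worker.

params = 70e9                 # hypothetical 70B-parameter model
grad_bytes = params * 2       # fp16 gradients
traffic = 2 * grad_bytes      # ~ring all-reduce bytes per worker per step

ethernet = 10e9 / 8           # 10 Gb/s network link, in bytes/s
interconnect = 900e9          # ~900 GB/s class datacenter GPU interconnect

print(traffic / ethernet)     # ~224 s per step just moving gradients
print(traffic / interconnect) # well under a second on the fast link
```

Hundreds of seconds of pure communication per training step over an ordinary network, versus a fraction of a second on a datacenter interconnect, which is the gap the cache-over-a-network-cable analogy is pointing at.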

SETI@home, for example, is unlike that; it distributes easily: a partial read-only snapshot goes out, intense computation happens locally, and a thin result is submitted back.
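That embarrassingly parallel pattern in a few lines (toy stand-in computation; real SETI@home work units are radio-signal chunks):

```python
# SETI-style pattern: workers get a small read-only slice of data,
# do heavy independent computation, and return a tiny result.
# No shared mutable state, so no synchronization between workers.

def worker(chunk):
    # stand-in for the heavy local computation on a read-only slice
    return max(chunk)  # "thin" result: one number per chunk

data = list(range(1000))
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]

# each call could run on a different machine; order doesn't matter
results = [worker(c) for c in chunks]
print(max(results))  # combining the thin results is cheap
```

The contrast with NN training is exactly the point of the thread: here the per-worker output is tiny and nothing invalidates anyone else's input, whereas in training the full gradient/state must round-trip every step.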

mendigou|1 year ago

Not disputing any of that, but telling the GP a flat-out "no" is incorrect, especially when distributed training and inference are the only way to run modern massive models.