top | item 38478192

dnnssl2 | 2 years ago

If you were to serve this from a datacenter, is the client-to-server networking round trip the slowest part of inference? Curious whether it would be faster to run this on cloud GPUs (better hardware, but farther away) or locally on worse hardware.

chillee | 2 years ago

Surprisingly, no. Part of the reason is that text generation is really expensive. Unlike traditional ML inference (e.g., with ResNets), you don't pass your data through the model just once; you pass it through over and over, once for each token you generate.

So, in practice, a full "text completion request" can often take on the order of seconds, which dwarfs the client <-> server roundtrip.
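A back-of-envelope sketch of this point (all numbers here are illustrative assumptions, not measurements from the post): one forward pass per generated token means total latency scales with token count, and the single network round trip becomes a rounding error.

```python
# Toy latency model: autoregressive generation does one forward pass
# per generated token, plus a single client<->server round trip.
def completion_latency_ms(n_tokens: int, ms_per_token: float, rtt_ms: float) -> float:
    # Total wall-clock time for one "text completion request".
    return n_tokens * ms_per_token + rtt_ms

# Assumed numbers: 200 generated tokens at 30 ms/token, 50 ms round trip.
total = completion_latency_ms(200, 30.0, 50.0)
rtt_share = 50.0 / total
print(total)      # 6050.0 ms total
print(rtt_share)  # the round trip is under 1% of it
```

With those assumed numbers the request takes ~6 seconds, of which the network round trip is less than 1% — which is the "dwarfs the roundtrip" point above.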

dnnssl2 | 2 years ago

Is this still the case for sliding-window attention / streaming LLMs, where you have a fixed-length attention window rather than an ever-growing context with quadratic scaling? You can even get better performance by purposely downsampling non-meaningful tokens while keeping the attention-sink tokens.
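A toy sketch of the streaming-LLM idea being referenced (a simplified illustration, not any library's actual implementation): keep a few initial "attention sink" entries plus a sliding window of the most recent entries in the KV cache, evicting from the middle, so the context the next token attends to stays bounded.

```python
class StreamingKVCache:
    """Toy StreamingLLM-style cache: retain `n_sink` initial attention-sink
    entries plus a sliding window of the `window` most recent entries."""

    def __init__(self, n_sink: int, window: int):
        self.n_sink = n_sink
        self.window = window
        self.entries: list[str] = []  # stand-ins for per-token KV entries

    def append(self, kv: str) -> None:
        self.entries.append(kv)
        # Evict the oldest non-sink entry once the cache exceeds its budget,
        # keeping the sinks at the front and the recent window at the back.
        if len(self.entries) > self.n_sink + self.window:
            del self.entries[self.n_sink]

    def context(self) -> list[str]:
        # What the next token's attention would see: at most n_sink + window entries.
        return list(self.entries)

cache = StreamingKVCache(n_sink=2, window=3)
for t in range(8):
    cache.append(f"t{t}")
print(cache.context())  # ['t0', 't1', 't5', 't6', 't7']
```

This keeps per-token attention cost constant regardless of total generation length, which is the premise of the question — though the answer above is about total tokens generated, not per-token cost, so the request still takes one forward pass per output token.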