top | item 38478331

dnnssl2 | 2 years ago

Is this still the case for sliding-window attention / streaming LLMs, where you have a fixed-length attention window rather than an ever-growing context with quadratic scaling? You even get better performance from purposely downsampling non-meaningful attention-sink tokens.
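To make the fixed-window idea concrete, here's a minimal sketch of a causal sliding-window attention mask (the function name and NumPy implementation are my own illustration, not from any particular library): each query attends only to the previous `window` positions, so per-token attention cost is O(window) instead of growing with sequence length.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    # True where query i may attend to key j: causal (j <= i)
    # and within the last `window` positions (j > i - window).
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

mask = sliding_window_mask(seq_len=6, window=3)
# Every row has at most `window` True entries, regardless of seq_len,
# which is where the linear (rather than quadratic) scaling comes from.
```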

chillee | 2 years ago

I cover it a bit in the blog post, but unless you have a really long context length (like 32k+), your primary computational cost doesn't come from attention but rather from loading your weights from VRAM into registers.
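A back-of-envelope calculation shows why weight loading dominates during decoding. The numbers below are my own assumptions (a hypothetical 7B-parameter model in fp16 on hardware roughly like an A100: ~2 TB/s memory bandwidth, ~312 TFLOP/s fp16), not figures from the post:

```python
# Per-token decode cost for an assumed 7B-parameter model in fp16.
params = 7e9
bytes_per_param = 2                       # fp16
weight_bytes = params * bytes_per_param   # ~14 GB streamed per decoded token

flops_per_token = 2 * params              # one multiply-add per weight

# Assumed A100-like hardware: ~2 TB/s HBM bandwidth, ~312 TFLOP/s fp16.
mem_time = weight_bytes / 2e12            # time to read the weights once
compute_time = flops_per_token / 312e12   # time for the matmul FLOPs

# mem_time is on the order of 100x compute_time, so decoding is
# memory-bandwidth-bound until attention over a very long context
# starts to dominate.
```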

I mean, practically speaking, completions from say, ChatGPT or Claude take seconds to finish :)