top | item 39965771

aiddun | 1 year ago

Speculative decoding does this to an extent - using a smaller model to generate its own predictions and putting them in the batch of the bigger model until they diverge

https://huggingface.co/blog/whisper-speculative-decoding
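The draft-then-verify loop described above can be sketched as a toy in Python. Both "models" here are stand-in greedy functions (hypothetical, not real LMs): the cheap draft model proposes k tokens autoregressively, the expensive target model checks all k positions at once, and generation keeps the longest agreeing prefix plus the target's correction at the first divergence. This is a minimal sketch of the control flow, not the probabilistic acceptance rule used in practice.

```python
def draft_next(tokens):
    # Hypothetical cheap draft model: next token is (last + 1) mod 10.
    return (tokens[-1] + 1) % 10

def target_next(tokens):
    # Hypothetical expensive target model: agrees with the draft
    # except that it wraps around at 7 instead of 10.
    return (tokens[-1] + 1) % 7

def speculative_decode(prompt, n_tokens, k=4):
    tokens = list(prompt)
    while len(tokens) - len(prompt) < n_tokens:
        # 1. Draft model proposes k tokens autoregressively (cheap per step).
        proposal = list(tokens)
        for _ in range(k):
            proposal.append(draft_next(proposal))
        drafted = proposal[len(tokens):]

        # 2. Target model scores every drafted position in what would be
        #    a single batched forward pass for a real transformer.
        accepted = []
        for i in range(k):
            prefix = tokens + drafted[:i]
            t = target_next(prefix)
            accepted.append(t)   # target's token is always valid to keep
            if t != drafted[i]:
                break            # divergence: discard the rest of the draft
        tokens.extend(accepted)
    return tokens[:len(prompt) + n_tokens]

print(speculative_decode([0], 8, k=4))
```

Because every accepted token is exactly what the target model would have emitted greedily, the output is identical to running the big model alone; the speedup comes from the target verifying several positions per forward pass instead of one.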

brrrrrm | 1 year ago

It doesn’t. It simply trades compute for latency by moving matrix multiplications “into the future.” It doesn’t actually save FLOPs (it uses more), and it doesn’t work at large batch sizes.

imtringued | 1 year ago

>doesn’t actually save FLOPs (uses more)

Does anyone even care? Saving FLOPs does nothing if you have to load the entire model from memory anyway: at small batch sizes, inference is memory-bandwidth bound, not compute bound. Going from two FLOPs per parameter to 0.5 or whatever might sound cool on paper, but you're loading those parameters either way and have gained nothing.