VHRanger | 2 months ago

In general, encoder+decoder models are much more efficient at inference than decoder-only models because they run over the entire input all at once (which leverages parallel compute more effectively).

The issue is that they're generally harder to train (they need input/output pairs as a training dataset) and don't generalize as naturally.


GaggiX | 2 months ago

> In general, encoder+decoder models are much more efficient at inference than decoder-only models because they run over the entire input all at once (which leverages parallel compute more effectively).

Decoder-only models also do this; the only difference is that they use masked (causal) attention.
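A minimal numpy sketch of that point (function and variable names are my own, not from the thread): both variants compute attention over the full sequence in a single matrix multiply; the decoder-only case just sets future positions to -inf before the softmax.

```python
import numpy as np

def attention(q, k, v, causal=False):
    """Scaled dot-product attention over the whole sequence at once.

    Both encoder-style (full) and decoder-style (causal) attention
    process every position in parallel; the causal variant merely masks
    out future positions before the softmax.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)  # (T, T): all position pairs at once
    if causal:
        # Position i may only attend to positions j <= i.
        T = scores.shape[0]
        future = np.triu(np.ones((T, T), dtype=bool), k=1)
        scores = np.where(future, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
T, d = 4, 8
x = rng.standard_normal((T, d))
full = attention(x, x, x, causal=False)   # encoder-style
masked = attention(x, x, x, causal=True)  # decoder-style

# Under the causal mask, position 0 attends only to itself, so its
# output is exactly its own value row.
assert np.allclose(masked[0], x[0])
```

Note that the mask changes which entries of the score matrix survive, not how the computation is scheduled: at training time (and during prefill at inference) both variants are equally parallel over the input.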