> our linear transformers are somewhat useless, as the positive impact from the speedup seen in long contexts is undermined by the negative impact of degraded learning.
> In a future post, we will explain how to improve the learning of linear transformers
So the techniques here are useless without special secret sauce that they're not disclosing. Yet. Mamba is already out there solving similar problems, but the more the merrier. I hope they publish the useful part soon.
That's pretty common in academia. You publish something new that is worse than the state of the art. To maintain some semblance of meaning for your work, you then say that the shortcomings will be addressed in future papers. Often those papers never surface, because somewhere along the way it turns out that even though your approach was new, it is fundamentally worse. This kind of thing happens all the time in research, and it only makes it to the surface thanks to the twisted publish-or-perish world academics now live in.
The point of this post isn’t any one linear transformer algorithm. They’re surveying a variety of linear transformers and presenting a general form in order to talk broadly about their performance characteristics.
I don't understand something: why do they claim they go from O(N^2) to O(N), when all they say they're doing is removing one exponentiation operation, which is O(1)? Where is the O(N) they are removing?
Removing the exponential allows some linear algebra based tricks. It makes the state space linear. Linearity allows a kind of running sum, where the state space at time T is quickly computable from the state space at time T-1.
That linearity model simplification has model expressiveness costs, which is why they don't fit the training data as well.
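A minimal NumPy sketch of the trick being described (my own illustration, not code from the post; real linear attention also applies a feature map and normalization, which this omits): once the exponential/softmax is gone, attention is a plain matrix product, and matrix products can be re-associated so the N x N attention matrix never has to exist.

```python
import numpy as np

# N tokens, d head dimension (toy sizes)
N, d = 512, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))

# Quadratic association: materializes the N x N matrix.
out_quadratic = (Q @ K.T) @ V   # O(N^2 * d) time, O(N^2) memory

# Linear association: same result, no N x N matrix.
out_linear = Q @ (K.T @ V)      # O(N * d^2) time, O(d^2) memory

assert np.allclose(out_quadratic, out_linear)
```

With the softmax in place, the exponential is applied elementwise to Q @ K.T, so the product cannot be re-associated like this; that's why removing it matters more than its O(1) cost suggests.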
It's described explicitly in section 1 where they first reduce to a linear relationship and then recognize that a portion of the formula can be captured in a state variable, and rewrite as a recurrence relation.
By persisting the state variable across subsequent computations they transform the quadratic formula for computing output into a linear formula computing output and next state from current state.
It's kind of like memoization, but since the state has a fixed size it's constant space (in the sequence length) too.
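The recurrence described above can be sketched like this (my illustration in NumPy, not the paper's code): instead of attending over the whole prefix at every step, keep a running d x d state S that is updated in constant work per token.

```python
import numpy as np

N, d = 128, 16
rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))

# Quadratic causal form: token t attends over all tokens <= t.
out_quad = np.stack([Q[t] @ K[: t + 1].T @ V[: t + 1] for t in range(N)])

# Linear recurrent form: S_t = S_{t-1} + k_t v_t^T, y_t = q_t S_t.
S = np.zeros((d, d))
out_rec = np.empty((N, d))
for t in range(N):
    S += np.outer(K[t], V[t])  # constant work per step, state size fixed
    out_rec[t] = Q[t] @ S      # read out from the running state

assert np.allclose(out_quad, out_rec)
```

The total work is now linear in N, and at inference you only need to carry S forward, not the whole prefix.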
You should check out MoE-Mamba (https://arxiv.org/abs/2401.04081), it's faster and more accurate than Transformer-MoE. Of course only time will tell if it's better when scaled up further than the paper goes.
Great writeup and interesting experiments. I can’t help but wonder what would happen if you instead made a rectified linear attention. Is that even possible?
modeless | 2 years ago
sigmoid10 | 2 years ago
3abiton | 2 years ago
SmartestUnknown | 2 years ago
(Disclaimer: I am an author on the linked paper)
TTPrograms | 2 years ago
Also I note the only thing you have posted before is a link to this paper in particular.
smsx | 2 years ago
hacketthfk | 2 years ago
robrenaud | 2 years ago
duped | 2 years ago
thomasahle | 2 years ago
Even heavily optimized, they are still (nearly) no better than normal flash attention up to context lengths around 10^4.
And then you haven't even started to account for the degradation in learning.
Maybe if you're doing 100k attention at inference it starts making sense... But then there are other methods you can start using too.
f_devd | 2 years ago
deepsquirrelnet | 2 years ago
bbertelsen | 2 years ago
Cieplak | 2 years ago
JoelEinbinder | 2 years ago
mangoo84 | 2 years ago
[deleted]