
Transformers Without Normalization

260 points | hellollm | 1 year ago | jiachenzhu.github.io

32 comments

[+] kouteiheika|1 year ago|reply
If true, this is a very nice incremental improvement. It looks like it doesn't meaningfully improve the capabilities of the model, but it is cheaper to compute than RMSNorm (which essentially all current state-of-the-art LLMs use), which means faster/cheaper training.
[+] rryan|1 year ago|reply
RMSNorm is pretty insignificant in terms of the overall compute in a transformer, though -- usually the reduction work can be fused with earlier or later operations.
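For reference, the RMSNorm being discussed is a small reduction plus an elementwise scale. A minimal NumPy sketch (the `eps` value and shapes are illustrative assumptions):

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # Scale each vector by its root-mean-square over the last axis,
    # then apply a learnable per-channel gain. Unlike LayerNorm there
    # is no mean subtraction and no bias.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * weight

x = np.random.default_rng(0).standard_normal((4, 8)).astype(np.float32)
y = rms_norm(x, np.ones(8, dtype=np.float32))
```

The `np.mean` reduction is the part that can be fused with adjacent kernels; the divide-and-scale is purely elementwise.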
[+] kouteiheika|1 year ago|reply
Okay, I just tried this on my pet transformer training benchmark and the results are very disappointing; it converges much more slowly than just using RMSNorm.

It either needs some significant hyperparameter tuning (besides tweaking alpha, which doesn't seem to do much for me), or some fancier initialization (I tried both the PyTorch default and orthogonal; no difference), or maybe my scalar optimizer doesn't work on it (I have a custom optimizer for scalars which speeds up convergence vs. Adam, but for DyT layers it seems to be just as good as Adam), or maybe it only catches up after billions of tokens (which I don't have the budget to test).

[+] joshlk|1 year ago|reply
When using low-precision formats like float8 you usually have to upcast the activations to BF16 before normalizing. So the normalization layers use proportionally more compute as you go to lower precision. Replacing these layers would help reduce the compute cost significantly.
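The upcast-before-reduction pattern described here can be sketched as follows. NumPy has no float8 or BF16 dtype, so this uses float16 as the low-precision stand-in and float32 as the accumulation type; the structure, not the exact dtypes, is the point:

```python
import numpy as np

def rms_norm_upcast(x_low, weight, eps=1e-6):
    # Upcast before the reduction: the mean-of-squares is unstable
    # when accumulated directly in a low-precision format.
    x = x_low.astype(np.float32)
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    y = (x / rms) * weight
    # Cast back down so the next matmul runs in low precision.
    return y.astype(x_low.dtype)

x8 = np.random.default_rng(1).standard_normal((4, 16)).astype(np.float16)
y8 = rms_norm_upcast(x8, np.ones(16, dtype=np.float32))
```

The cast up, reduce, cast down round trip is the proportional overhead the comment refers to; a purely elementwise replacement could stay in the low-precision format throughout.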
[+] qmatch|1 year ago|reply
Need to read the details, but removing the norm can be big. It's always a pain to make sure your network is normalized properly when trying new architectures. There will likely still be other implications of the tanh, since the norm is sometimes solving a conditioning problem, but IMO more alternatives are welcome.
[+] blackbear_|1 year ago|reply
And so vanishing gradients are not a thing anymore?
[+] tsurba|1 year ago|reply
Proper initialization of layers keeps gradient magnitudes from vanishing/exploding in deep networks. If you make sure the output of each layer has mean 0 and std 1, for example, the gradients will be reasonable as well.

I recommend e.g. the OG ResNet paper and its follow-up from Kaiming He et al.

For a modern take on RNNs, read https://arxiv.org/abs/2303.06349 by DeepMind.

The point there is essentially that the largest eigenvalue magnitude (the spectral radius) needs to be around 1, so that repeated application of a linear transformation doesn't cause the activations to grow or shrink.
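The spectral-radius point can be checked numerically. A toy sketch (matrix size, scale factors, and step count are arbitrary choices, not from the linked paper):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))

def spectral_radius(M):
    # Largest eigenvalue magnitude: governs the asymptotic growth
    # rate of repeated application x -> M @ x.
    return np.max(np.abs(np.linalg.eigvals(M)))

def norm_after(M, steps, x):
    for _ in range(steps):
        x = M @ x
    return np.linalg.norm(x)

x0 = rng.standard_normal(64)
W1 = W / spectral_radius(W)   # radius 1: activations stay bounded
shrink = 0.5 * W1             # radius 0.5: activations vanish
grow = 1.5 * W1               # radius 1.5: activations explode
```

After 50 applications, `shrink` drives the activation norm toward zero and `grow` blows it up by many orders of magnitude, while the radius-1 matrix keeps it in a sane range.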

[+] tripplyons|1 year ago|reply
I think ResNet pretty much solved vanishing gradients. As for exploding gradients, that is typically handled with good parameter initialization and normalization. The paper in question proposes an alternative to normalization.
[+] imjonse|1 year ago|reply
Good question. That was an issue with tanh as the activation function, before residual connections and normalization layers. Tanh used as normalization, with other activations and residuals present, is apparently OK.
[+] toxik|1 year ago|reply
Transformers learn residuals, as you can see in the figure: y = x + f(x).
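The residual form is exactly why tanh no longer causes vanishing gradients. A scalar toy sketch (the gain `A` and depth are illustrative, not from the paper): without the skip connection the chain rule multiplies factors below 1, while with `y = x + tanh(A*x)` each factor is at least 1.

```python
import numpy as np

A = 0.5  # toy gain inside the tanh, chosen so each plain factor is < 1

def grad_plain(x, depth):
    # d/dx of tanh(A*x) composed `depth` times: each chain-rule factor
    # A * sech^2(A*x) is at most A < 1, so the product vanishes.
    g = 1.0
    for _ in range(depth):
        g *= A * (1.0 - np.tanh(A * x) ** 2)
        x = np.tanh(A * x)
    return g

def grad_residual(x, depth):
    # With y = x + tanh(A*x), each factor is 1 + A * sech^2(A*x) >= 1:
    # the identity path keeps the gradient from vanishing.
    g = 1.0
    for _ in range(depth):
        g *= 1.0 + A * (1.0 - np.tanh(A * x) ** 2)
        x = x + np.tanh(A * x)
    return g
```

At depth 50 the plain composition's gradient is numerically zero, while the residual version's gradient stays above 1.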
[+] Lerc|1 year ago|reply
Is it just me, or have they provided graphs of LN input against LN output when the tanh(a*x) is also followed by a weight and bias?

Surely you would want to compare against the output of the LayerNorm without its weight and bias to get an impression of their similarity.

I guess it doesn't matter if the final result works, but I feel like looking at the bit they are changing in isolation might provide better insight into what is happening.

[+] lukah|1 year ago|reply
From their implementation it looks like they’re calculating tanh and then applying a weight and bias
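A minimal sketch of that order of operations -- tanh first, then a learnable per-channel weight and bias. The shapes and the alpha init here are assumptions for illustration, not the authors' reference code:

```python
import numpy as np

class DyT:
    """Dynamic-tanh layer as the comments describe it: an elementwise
    tanh(alpha * x) followed by a learnable weight and bias."""

    def __init__(self, dim, alpha0=0.5):
        self.alpha = alpha0                           # learnable scalar
        self.weight = np.ones(dim, dtype=np.float32)  # per-channel gain
        self.bias = np.zeros(dim, dtype=np.float32)   # per-channel shift

    def __call__(self, x):
        # No reduction over the feature axis: unlike LayerNorm/RMSNorm
        # this is purely elementwise, which is the claimed compute win.
        return self.weight * np.tanh(self.alpha * x) + self.bias

x = np.random.default_rng(0).standard_normal((4, 8)).astype(np.float32)
y = DyT(8)(x)
```

This also shows the point raised above: the tanh output is bounded in (-1, 1), and it is the trailing weight and bias that restore the output scale a LayerNorm's affine parameters would provide.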
[+] gdiamos|1 year ago|reply
What are the practical implications of this?
[+] gricardo99|1 year ago|reply
From the abstract:

  By incorporating DyT, Transformers without normalization can match or exceed the performance of their normalized counterparts, mostly without hyperparameter tuning.
[+] adamnemecek|1 year ago|reply

[deleted]

[+] randomNumber7|1 year ago|reply
I'll give you a call when I've finished building my Tesla tower. That also went unnoticed by the engineering/science communities.
[+] qoez|1 year ago|reply
Don't advertise blatantly like this on HN please