
gricardo99 | 11 months ago

From the abstract:

  By incorporating DyT, Transformers without normalization can match or exceed the performance of their normalized counterparts, mostly without hyperparameter tuning.
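
For context, DyT is Dynamic Tanh: an element-wise replacement for the normalization layer, y = gamma * tanh(alpha * x) + beta, where alpha is a learnable scalar and gamma/beta mirror LayerNorm's affine parameters. A minimal PyTorch sketch paraphrasing the paper's description (init values assumed from the paper's defaults):

  import torch
  import torch.nn as nn

  class DyT(nn.Module):
      # Dynamic Tanh: y = gamma * tanh(alpha * x) + beta, applied element-wise.
      # alpha is a single learnable scalar; gamma/beta are per-channel,
      # playing the same role as LayerNorm's affine parameters.
      def __init__(self, dim, alpha_init=0.5):
          super().__init__()
          self.alpha = nn.Parameter(torch.ones(1) * alpha_init)
          self.gamma = nn.Parameter(torch.ones(dim))
          self.beta = nn.Parameter(torch.zeros(dim))

      def forward(self, x):
          return self.gamma * torch.tanh(self.alpha * x) + self.beta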


gdiamos | 11 months ago

Sure, but why would one prefer tanh instead of normalization layers if they have the same accuracy?

I suppose normalization kernels have reductions in them, but how hard are reductions in 2025?
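
Presumably the appeal is kernel shape rather than accuracy: LayerNorm needs per-token reductions (mean and variance over the feature dimension) before it can normalize, while DyT is purely element-wise, with no cross-element dependency. A rough sketch of the contrast (shapes illustrative, eps value assumed):

  import torch

  x = torch.randn(8, 128, 768)  # (batch, tokens, features)

  # LayerNorm: two reductions over the feature dim, then a normalization pass.
  mu = x.mean(dim=-1, keepdim=True)
  var = x.var(dim=-1, unbiased=False, keepdim=True)
  ln = (x - mu) / torch.sqrt(var + 1e-5)

  # DyT (affine omitted): a single element-wise op, no reduction at all.
  alpha = 0.5
  dyt = torch.tanh(alpha * x)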