Left: original Transformer block. Right: block with our proposed Dynamic Tanh (DyT) layer.
DyT is a straightforward replacement for commonly used Layer Norm or RMSNorm layers.
Transformers with DyT match or exceed the performance of their normalized counterparts.
Abstract
Normalization layers are ubiquitous in modern neural networks and have long been considered essential.
This work demonstrates that Transformers without normalization can achieve the same or better performance using a remarkably simple technique.
We introduce Dynamic Tanh (DyT), an element-wise operation $$\mathrm{DyT}(\boldsymbol{x}) = \tanh(\alpha \boldsymbol{x}),$$ as a drop-in replacement for normalization layers in Transformers.
DyT is inspired by the observation that layer normalization in Transformers often produces tanh-like, S-shaped input-output mappings.
By incorporating DyT, Transformers without normalization can match or exceed the performance of their normalized counterparts, mostly without hyperparameter tuning.
We validate the effectiveness of Transformers with DyT across diverse settings, ranging from recognition to generation, supervised to self-supervised learning, and computer vision to language models.
These findings challenge the conventional understanding that normalization layers are indispensable in modern neural networks, and offer new insights into their role in deep networks.
Implementation
The DyT module can be implemented in a few lines of PyTorch code:
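A minimal sketch is given below, following the $\mathrm{DyT}(\boldsymbol{x}) = \tanh(\alpha \boldsymbol{x})$ formulation above, with a learnable scalar `alpha` plus a per-channel weight and bias mirroring the affine parameters of a standard normalization layer; the `alpha_init` value of 0.5 used here is an assumption, not necessarily the paper's default.

```python
import torch
import torch.nn as nn

class DyT(nn.Module):
    """Dynamic Tanh: element-wise tanh(alpha * x) followed by a per-channel affine,
    intended as a drop-in replacement for LayerNorm / RMSNorm."""

    def __init__(self, num_features, alpha_init=0.5):
        super().__init__()
        # Learnable scalar that scales the input before the tanh squashing.
        self.alpha = nn.Parameter(torch.ones(1) * alpha_init)
        # Per-channel affine parameters, analogous to LayerNorm's weight and bias.
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        x = torch.tanh(self.alpha * x)
        return x * self.weight + self.bias
```

Swapping it into an existing Transformer block is then a one-line change, e.g. replacing `nn.LayerNorm(dim)` with `DyT(dim)`.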
7 Comments
gdiamos
What are the practical implications of this?
kouteiheika
If true, this is a very nice incremental improvement. It looks like it doesn't meaningfully improve the capabilities of the model, but it is cheaper to compute than RMSNorm (which essentially all current state-of-the-art LLMs use), which means faster/cheaper training.
adamnemecek
It feels like the end goal of this is energy-based models, Yann LeCun's favorite ML approach.
We at Traceoid http://traceoid.ai have identified a promising approach for scaling EBMs. Join the discord channel https://discord.com/invite/mr9TAhpyBW
qmatch
Need to read the details, but removing the norm can be big. It's always a pain to make sure that your network is normalized properly when trying new architectures. Likely there will still be other implications of the tanh, since the norm is sometimes solving a conditioning problem, but IMO more alternatives are welcome.
Lerc
Is it just me, or have they provided graphs of LN input against LN output when the tanh(a*x) is also followed by a weight and bias?
Surely you would want to compare the output of the LayerNorm without the weight and bias to get an impression of their similarity.
I guess it doesn't matter if the final result works, but I feel like looking at the bit they are changing in isolation might provide better insight into what is happening.
blackbear_
And so vanishing gradients are not a thing anymore?
joshlk
When using low-precision formats like float8, you usually have to upcast the activations to BF16 before normalising, so the normalisation layers are proportionally using more compute when going to lower precision. Replacing these layers would help reduce the compute cost significantly.