
Transformers Without Normalization by hellollm

7 Comments

  • gdiamos
    Posted March 15, 2025 at 5:46 am

    What are the practical implications of this?

  • kouteiheika
    Posted March 15, 2025 at 5:50 am

    If true, this is a very nice incremental improvement. It looks like it doesn't meaningfully improve the capabilities of the model, but it is cheaper to compute than RMSNorm (which essentially all current state-of-the-art LLMs use), which means faster/cheaper training.
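
A minimal PyTorch sketch of the two layer types being compared here, assuming the paper's Dynamic Tanh (DyT) formulation of an element-wise tanh(alpha * x) followed by a learned scale and shift; treat it as an illustrative sketch, not the paper's reference implementation:

    import torch
    import torch.nn as nn

    class RMSNorm(nn.Module):
        # RMSNorm: scale by the inverse root-mean-square over the feature
        # dimension, then apply a learned per-feature weight. The mean is a
        # reduction across the features of each token.
        def __init__(self, dim, eps=1e-6):
            super().__init__()
            self.eps = eps
            self.weight = nn.Parameter(torch.ones(dim))

        def forward(self, x):
            rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
            return self.weight * (x * rms)

    class DyT(nn.Module):
        # Dynamic Tanh: tanh(alpha * x) applied element-wise (alpha is a
        # learnable scalar), followed by the usual affine weight and bias.
        # No cross-feature reduction is needed.
        def __init__(self, dim, alpha_init=0.5):
            super().__init__()
            self.alpha = nn.Parameter(torch.full((1,), alpha_init))
            self.weight = nn.Parameter(torch.ones(dim))
            self.bias = nn.Parameter(torch.zeros(dim))

        def forward(self, x):
            return self.weight * torch.tanh(self.alpha * x) + self.bias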

  • adamnemecek
    Posted March 15, 2025 at 5:53 am

    It feels like the end goal of this is energy-based models, Yann LeCun's favorite ML approach.

    We at Traceoid http://traceoid.ai have identified a promising approach for scaling EBMs. Join the discord channel https://discord.com/invite/mr9TAhpyBW

  • qmatch
    Posted March 15, 2025 at 6:10 am

    Need to read the details, but removing the norm can be big. It’s always a pain to make sure that your network is normalized properly when trying new architectures. There will likely still be other implications of the tanh, since the norm sometimes solves a conditioning problem, but IMO more alternatives are welcome.

  • Lerc
    Posted March 15, 2025 at 6:34 am

    Is it just me, or have they provided graphs of LN input against LN output when the tanh(a*x) is also followed by a weight and bias?

    Surely you would want to compare the output of the LayerNorm without the weight and bias to get an impression of their similarity.

    I guess it doesn't matter if the final result works, but I feel like looking at the bit they are changing in isolation might give better insight into what is happening.
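
A rough sketch of the isolated comparison described above: strip LayerNorm down to its centring-and-scaling core (no weight or bias) and correlate it against a bare tanh(alpha * x). The activations and the alpha value are made up purely for illustration:

    import torch

    def layernorm_pre_affine(x, eps=1e-5):
        # LayerNorm without the learned weight/bias: just centre and rescale.
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        return (x - mean) / torch.sqrt(var + eps)

    x = torch.randn(4096, 768) * 3.0     # stand-in for real activations
    ln_core = layernorm_pre_affine(x)
    dyt_core = torch.tanh(0.5 * x)       # alpha = 0.5, chosen arbitrarily

    # Compare the two "cores" before any affine transform is applied.
    stacked = torch.stack([ln_core.flatten(), dyt_core.flatten()])
    corr = torch.corrcoef(stacked)[0, 1]
    print(f"correlation of pre-affine LayerNorm vs tanh(alpha*x): {corr:.3f}")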

  • blackbear_
    Posted March 15, 2025 at 6:49 am

    And so vanishing gradients are not a thing anymore?

  • joshlk
    Posted March 15, 2025 at 7:36 am

    When using low-precision formats like float8, you usually have to upcast the activations to BF16 before normalising, so the normalisation layers use a proportionally larger share of the compute as you move to lower precision. Replacing these layers would help reduce the compute cost significantly.
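
A sketch of the precision pattern described above, with BF16 standing in for the narrow format and FP32 as the wider one (eager-mode float8 arithmetic support is limited, and real float8 pipelines rely on fused, scaled kernels):

    import torch

    # Narrow-precision activations (BF16 here as a stand-in for float8).
    x_narrow = torch.randn(16, 4096, dtype=torch.bfloat16)

    # Norm path: upcast to a wider type before the reduction, then cast back.
    x_wide = x_narrow.to(torch.float32)
    rms = torch.rsqrt(x_wide.pow(2).mean(dim=-1, keepdim=True) + 1e-6)
    y_norm = (x_wide * rms).to(torch.bfloat16)

    # tanh(alpha * x) path: element-wise only, with no cross-feature
    # reduction, so in principle it can stay in the narrow format
    # (subject to hardware/kernel support).
    alpha = 0.5
    y_dyt = torch.tanh(alpha * x_narrow)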
