HLA: Hadamard Linear Attention

📅 2026-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational complexity of standard attention mechanisms and the limited expressivity of existing linear attention methods, which rely on low-order rational approximations of the softmax function. To overcome these limitations, the authors propose Hadamard Linear Attention (HLA), a novel approach that, for the first time in linear attention, applies nonlinearity after computing query-key pairwise similarities, enabling a higher-order rational approximation of softmax that better balances efficiency and representational capacity. HLA leverages kernel functions, the Hadamard product, and efficient matrix operations, eliminating the need for costly tensor reshaping. Experiments on large-scale video generation using diffusion Transformer models demonstrate that HLA substantially reduces computational overhead while maintaining excellent generation quality.

📝 Abstract
The attention mechanism is an important reason for the success of transformers. It relies on computing pairwise relations between tokens. To reduce the high computational cost of standard quadratic attention, linear attention has been proposed as an efficient approximation. It employs kernel functions that are applied independently to the inputs before the pairwise similarities are calculated. This allows an efficient computational procedure, but it amounts to approximating softmax with only a low-degree rational function. We propose Hadamard Linear Attention (HLA). Unlike previous work on linear attention, the nonlinearity in HLA is not applied separately to queries and keys but, analogously to standard softmax attention, after the pairwise similarities have been computed. We show that the proposed nonlinearity amounts to a higher-degree rational approximation of softmax. We derive an efficient computational scheme for the proposed method that is similar to that of standard linear attention. In contrast to other approaches, no time-consuming tensor reshaping is necessary to apply the proposed algorithm. We demonstrate the effectiveness of the approach by applying it to a large diffusion transformer model for video generation, an application that involves very large numbers of tokens.
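The contrast drawn in the abstract can be illustrated with a short sketch. The first function is standard linear attention, where the kernel is applied to queries and keys separately and associativity avoids the n x n similarity matrix. The second function is *not* the paper's HLA algorithm but a minimal, hedged illustration of the general idea: applying a nonlinearity (here an elementwise square, a Hadamard product of the similarity matrix with itself) *after* the pairwise similarities, while keeping linear complexity in sequence length via the face-splitting (row-wise Kronecker) identity (QKᵀ) ∘ (QKᵀ) = (Q ⊛ Q)(K ⊛ K)ᵀ. The kernel choice `phi` and all function names are illustrative assumptions.

```python
import numpy as np

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Standard linear attention: kernel applied to Q and K independently.

    Associativity lets us compute phi(Q) @ (phi(K).T @ V) in O(n * d * d_v)
    instead of materializing the n x n matrix phi(Q) @ phi(K).T.
    """
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                       # (d, d_v) summary of keys and values
    z = Kp.sum(axis=0)                  # (d,) normalizer accumulator
    return (Qp @ kv) / (Qp @ z)[:, None]

def face_split(X):
    """Row-wise Kronecker (face-splitting) product of X with itself.

    Maps each row x to x (x) x, so inner products become squared:
    face_split(Q) @ face_split(K).T == (Q @ K.T) ** 2 elementwise.
    """
    n, d = X.shape
    return (X[:, :, None] * X[:, None, :]).reshape(n, d * d)

def squared_similarity_attention(Q, K, V):
    """Illustrative sketch (not the paper's exact method): nonlinearity
    applied AFTER the similarities, as an elementwise square, computed
    without ever forming the n x n similarity matrix. The cost is a
    feature dimension of d^2 instead of d."""
    Q2, K2 = face_split(Q), face_split(K)
    num = Q2 @ (K2.T @ V)               # numerator, linear in sequence length
    z = Q2 @ K2.sum(axis=0)             # row-wise normalizer
    return num / z[:, None]
```

Squaring the similarities is a degree-2 rational approximation of softmax once normalized; the abstract's "higher-degree rational function" claim is the same structural idea, though HLA's specific construction and its reshape-free computation are detailed in the paper itself.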
Problem

Research questions and friction points this paper is trying to address.

attention mechanism
linear attention
softmax approximation
computational efficiency
transformer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hadamard Linear Attention
linear attention
softmax approximation
efficient transformer
video generation