Native Hybrid Attention for Efficient Sequence Modeling

📅 2025-10-08
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Transformer-based sequence modeling faces quadratic (O(n²)) computational complexity, while efficient linear-attention alternatives struggle to retain long-range information. To address this trade-off, we propose Native Hybrid Attention (NHA), the first architecture unifying linear and full attention *within* and *across* layers. NHA operates in a shared key-value space and employs a single softmax to dynamically weight contributions, integrating sliding-window attention for enhanced local perception and leveraging linear RNNs for efficient long-term state maintenance. Crucially, NHA introduces no additional fusion parameters and smoothly reduces to either pure linear attention or standard full attention. Experiments demonstrate that NHA significantly outperforms both vanilla Transformers and existing hybrid models on recall-intensive and commonsense reasoning benchmarks. When substituted for the attention modules of pretrained large language models, NHA preserves competitive accuracy while substantially accelerating inference.

šŸ“ Abstract
Transformers excel at sequence modeling but face quadratic complexity, while linear attention offers improved efficiency but often compromises recall accuracy over long contexts. In this work, we introduce Native Hybrid Attention (NHA), a novel hybrid architecture of linear and full attention that integrates both intra- and inter-layer hybridization into a unified layer design. NHA maintains long-term context in key-value slots updated by a linear RNN, and augments them with short-term tokens from a sliding window. A single softmax attention operation is then applied over all keys and values, enabling per-token and per-head context-dependent weighting without requiring additional fusion parameters. The inter-layer behavior is controlled through a single hyperparameter, the sliding window size, which allows smooth adjustment between purely linear and full attention while keeping all layers structurally uniform. Experimental results show that NHA surpasses Transformers and other hybrid baselines on recall-intensive and commonsense reasoning tasks. Furthermore, pretrained LLMs can be structurally hybridized with NHA, achieving competitive accuracy while delivering significant efficiency gains. Code is available at https://github.com/JusenD/NHA.
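The core idea in the abstract can be sketched in a few lines: one query attends over long-term key-value slots (maintained by a linear RNN) concatenated with short-term sliding-window tokens, under a single softmax and with no fusion parameters. This is a minimal illustrative sketch, not the paper's implementation; all shapes and names (`nha_attention_step`, toy slot/window counts) are assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def nha_attention_step(q, slot_k, slot_v, win_k, win_v):
    # Concatenate long-term KV slots (the linear RNN's memory) with
    # short-term sliding-window KV, then apply ONE softmax over all
    # keys -- context-dependent weighting with no fusion parameters.
    k = np.concatenate([slot_k, win_k], axis=0)  # (S + W, d)
    v = np.concatenate([slot_v, win_v], axis=0)  # (S + W, d)
    scores = k @ q / np.sqrt(q.shape[-1])        # (S + W,)
    weights = softmax(scores)                    # single softmax
    return weights @ v                           # (d,)

# toy sizes: S = 4 long-term slots, W = 3 window tokens, head dim d = 8
rng = np.random.default_rng(0)
d, S, W = 8, 4, 3
out = nha_attention_step(
    rng.normal(size=d),
    rng.normal(size=(S, d)), rng.normal(size=(S, d)),
    rng.normal(size=(W, d)), rng.normal(size=(W, d)),
)
print(out.shape)  # (8,)
```

Because the window size is the only structural knob, shrinking W toward 0 leaves purely linear (slot-only) attention, while growing W toward the full sequence recovers standard attention.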
Problem

Research questions and friction points this paper is trying to address.

Addresses quadratic complexity of Transformers in sequence modeling
Improves recall accuracy in long-context scenarios with hybrid attention
Enables efficient structural hybridization for pretrained language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid architecture combining linear and full attention
Maintains long-term context with linear RNN updates
Uses sliding window for short-term token augmentation
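The second bullet above (long-term context via linear RNN updates) can be illustrated with a generic linear-attention-style recurrence that folds each token into a fixed-size memory. This is a hedged sketch: the decay-plus-outer-product rule below is a common choice in linear attention literature, and NHA's exact update may differ.

```python
import numpy as np

def linear_rnn_kv_update(state, k, v, decay=0.95):
    # Fold one token's (k, v) pair into a fixed-size KV memory:
    #   state <- decay * state + outer(k, v)
    # Memory stays (d, d) no matter how long the stream is.
    return decay * state + np.outer(k, v)

d = 4
state = np.zeros((d, d))
for _ in range(10):  # stream 10 tokens; memory size never grows
    k, v = np.ones(d), np.ones(d)
    state = linear_rnn_kv_update(state, k, v)
print(state.shape)  # (4, 4)
```

The constant-size state is what gives the linear-attention path its efficiency; the sliding window then restores exact recall for recent tokens that a decayed summary would blur.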
🔎 Similar Papers
No similar papers found.
Jusen Du
Tsinghua University
Jiaxi Hu
The Hong Kong University of Science and Technology (Guangzhou)
Tao Zhang
Tsinghua University
Weigao Sun
Research Scientist, Shanghai AI Laboratory
LLM · Deep Learning · Optimization
Yu Cheng
The Chinese University of Hong Kong