HAAN: A Holistic Approach for Accelerating Normalization Operations in Large Language Models

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high latency and energy consumption of LayerNorm operations in large language models (LLMs), this paper proposes HAAN, a holistic hardware–software co-designed acceleration framework. HAAN jointly restructures the computation flow, memory access patterns, and hardware mapping of normalization, eliminating redundant computation and memory bottlenecks without degrading accuracy. Key techniques include low-precision operator fusion, dataflow reordering, cache-aware on-chip scheduling, and a dedicated hardware unit. Experiments show that HAAN reduces end-to-end inference latency by 42% and improves energy efficiency by 3.1× over state-of-the-art methods, while integrating seamlessly into mainstream LLM inference engines.

📝 Abstract
Large language models (LLMs) have revolutionized natural language processing (NLP) tasks by achieving state-of-the-art performance across a range of benchmarks. Central to the success of these models is the integration of sophisticated architectural components aimed at improving training stability, convergence speed, and generalization capabilities. Among these components, normalization operations, such as layer normalization (LayerNorm), emerge as a pivotal technique, offering substantial benefits to overall model performance. However, previous studies have indicated that normalization operations can substantially elevate processing latency and energy usage. In this work, we adopt the principles of algorithm and hardware co-design, introducing a holistic normalization acceleration method named HAAN. The evaluation results demonstrate that HAAN achieves significantly better hardware performance than state-of-the-art solutions.
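For context, LayerNorm normalizes each token's feature vector by its mean and variance before applying a learned per-element scale and shift. A minimal reference sketch is below; it illustrates the standard operation the paper targets, not HAAN's optimized implementation, and the function name and arguments are illustrative:

```python
import math

def layer_norm(x, gamma, beta, eps=1e-5):
    """Reference LayerNorm over one feature vector x.

    y_i = (x_i - mean) / sqrt(var + eps) * gamma_i + beta_i

    The mean/variance reduction is the serial, memory-bound step that
    makes normalization costly on hardware (this sketch is not HAAN).
    """
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    inv_std = 1.0 / math.sqrt(var + eps)
    return [g * (v - mean) * inv_std + b for v, g, b in zip(x, gamma, beta)]
```

Note that the two reduction passes (mean, then variance) over the full vector are what acceleration schemes typically restructure or fuse.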
Problem

Research questions and friction points this paper is trying to address.

Normalization operations slow down LLM inference
Elevated processing latency
High energy usage in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Algorithm and hardware co-design
Holistic normalization acceleration method (HAAN)
Improved hardware performance over state-of-the-art solutions
Tianfan Peng
Tandon School of Engineering, New York University, New York, USA; Shenzhen Institute of Information Technology, Shenzhen, China
Jiajun Qin
Zhejiang University
Computer Architecture
Tianhua Xia
New York University
Computer Architecture
Sai Qian Zhang
New York University