🤖 AI Summary
To address the high latency and energy consumption incurred by LayerNorm operations in large language models (LLMs), this paper proposes HAAN, a holistic hardware–software co-designed acceleration framework. HAAN jointly restructures the computation flow, memory access patterns, and hardware mapping of normalization, eliminating redundant computation and memory bottlenecks without degrading accuracy. Key techniques include custom low-precision operator fusion, dataflow reordering, on-chip cache-aware scheduling, and a dedicated hardware unit design. Experimental evaluation shows that HAAN reduces end-to-end inference latency by 42% and improves energy efficiency by 3.1× over state-of-the-art methods, while integrating seamlessly into mainstream LLM inference engines.
📝 Abstract
Large language models (LLMs) have revolutionized natural language processing (NLP) by achieving state-of-the-art performance across a range of benchmarks. Central to their success is the integration of sophisticated architectural components aimed at improving training stability, convergence speed, and generalization. Among these components, normalization operations, such as layer normalization (LayerNorm), emerge as a pivotal technique, offering substantial benefits to overall model performance. However, previous studies have indicated that normalization operations can substantially increase processing latency and energy consumption. In this work, we adopt the principles of algorithm–hardware co-design and introduce HAAN, a holistic normalization acceleration method. Evaluation results demonstrate that HAAN achieves significantly better hardware performance than state-of-the-art solutions.
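For concreteness, the operation being accelerated can be sketched as below. This is the standard LayerNorm formulation, not HAAN's optimized variant; the function name and NumPy-based implementation are illustrative assumptions, not taken from the paper. The per-token mean and variance reductions are the statistics computations whose latency and energy cost motivate the work.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Standard LayerNorm over the last (hidden) dimension.

    x:     activations, shape (..., hidden)
    gamma: learned scale, shape (hidden,)
    beta:  learned shift, shape (hidden,)
    """
    # Per-token statistics over the hidden dimension: these two
    # reductions are the latency- and energy-critical steps that
    # normalization accelerators such as HAAN target.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    # Normalize, then apply the learned affine transform.
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

With `gamma = 1` and `beta = 0`, each row of the output has (approximately) zero mean and unit variance, which is the invariant the normalization provides.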