H2EAL: Hybrid-Bonding Architecture with Hybrid Sparse Attention for Efficient Long-Context LLM Inference

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high latency and energy consumption bottlenecks in deploying large language models (LLMs) on edge devices—particularly due to key-value (KV) cache overhead in long-context scenarios—this work proposes an algorithm-hardware co-optimization framework. It introduces a static-dynamic hybrid sparse attention mechanism, synergistically integrated with a memory-compute colocation architecture leveraging hybrid bonding, near-memory computing units, and a load-balancing scheduler. This design alleviates distributed memory bandwidth pressure and enhances computational resource utilization. Experimental results demonstrate 5.20–48.21× inference speedup and 6.22–73.48× energy efficiency improvement over baseline approaches, with only 0.87% average accuracy degradation. The framework significantly advances practical, efficient long-context LLM inference at the edge.

📝 Abstract
Large language models (LLMs) have demonstrated remarkable proficiency in a wide range of natural language processing applications. However, the high energy and latency overhead induced by the KV cache limits edge deployment, especially for long contexts. Emerging hybrid bonding (HB) technology has been proposed as a promising alternative to conventional near-memory processing (NMP) architectures, offering improved bandwidth efficiency and lower power consumption while exhibiting characteristics of distributed memory. In this paper, we propose H2EAL, a hybrid bonding-based accelerator with sparse attention algorithm-hardware co-design for efficient LLM inference at the edge. At the algorithm level, we propose a hybrid sparse attention scheme with static and dynamic sparsity for different heads to fully leverage sparsity with high accuracy. At the hardware level, we co-design the hardware to support hybrid sparse attention and propose memory-compute co-placement to address the distributed memory bottleneck. Since different attention heads exhibit different sparse patterns and the attention structure often mismatches the HB architecture, we further develop a load-balancing scheduler with parallel tiled attention to address workload imbalance and optimize the mapping strategy. Extensive experiments demonstrate that H2EAL achieves 5.20–48.21× speedup and 6.22–73.48× energy efficiency improvement over a baseline HB implementation, with a negligible average accuracy drop of 0.87% across multiple benchmarks.
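The core algorithmic idea described in the abstract is to give each attention head either a fixed (static) sparsity pattern or a runtime-selected (dynamic) one over the KV cache. The sketch below is a minimal, hypothetical illustration of that per-head split, not the paper's implementation: static heads attend to a few initial "sink" tokens plus a recent window, dynamic heads pick the top-k keys per query, and all function names and parameters (num_sink, window, top_k, static_heads) are assumptions made for illustration.

```python
# Minimal sketch (assumptions, not H2EAL's actual algorithm) of per-head
# static vs. dynamic sparse attention over a cached KV sequence.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def static_sparse_attention(q, K, V, num_sink=4, window=64):
    """Attend only to a fixed pattern: a few initial 'sink' tokens plus a
    local window of recent tokens (one common static sparsity pattern)."""
    T = K.shape[0]
    idx = np.unique(np.concatenate([np.arange(min(num_sink, T)),
                                    np.arange(max(0, T - window), T)]))
    scores = q @ K[idx].T / np.sqrt(q.shape[-1])
    return softmax(scores) @ V[idx]

def dynamic_sparse_attention(q, K, V, top_k=64):
    """Attend to the top-k keys selected at runtime from the full KV cache."""
    scores = q @ K.T / np.sqrt(q.shape[-1])
    top_k = min(top_k, K.shape[0])
    idx = np.argpartition(scores, -top_k)[-top_k:]
    return softmax(scores[idx]) @ V[idx]

def hybrid_attention(q_heads, K_heads, V_heads, static_heads):
    """Route each head to its assigned sparsity mode."""
    outputs = []
    for h, q in enumerate(q_heads):
        if h in static_heads:
            outputs.append(static_sparse_attention(q, K_heads[h], V_heads[h]))
        else:
            outputs.append(dynamic_sparse_attention(q, K_heads[h], V_heads[h]))
    return np.stack(outputs)

# Toy usage: 8 heads, 1024 cached tokens, head dim 64; heads 0-3 use the
# static pattern, the remaining heads select keys dynamically per query.
H, T, D = 8, 1024, 64
rng = np.random.default_rng(0)
q = rng.standard_normal((H, D))
K = rng.standard_normal((H, T, D))
V = rng.standard_normal((H, T, D))
out = hybrid_attention(q, K, V, static_heads={0, 1, 2, 3})
print(out.shape)  # (8, 64)
```

In this toy form, static heads touch a constant number of KV entries regardless of context length, while dynamic heads must still scan the cache to score keys before selecting the top-k, which is why the two head types place very different demands on memory bandwidth.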
Problem

Research questions and friction points this paper is trying to address.

Reducing KV cache energy and latency overhead for edge LLM deployment, especially with long contexts
Exploiting hybrid bonding's bandwidth and power advantages while handling its distributed-memory characteristics
Selecting per-head sparse attention patterns (static vs. dynamic) without sacrificing accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid static-dynamic sparse attention assigned per head to preserve accuracy
Memory-compute co-placement to relieve the distributed-memory bandwidth bottleneck
Load-balancing scheduler with parallel tiled attention for workload-balanced mapping (a minimal scheduling sketch follows this list)
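To make the load-balancing idea concrete: heads with dynamic sparsity scan far more of the KV cache than static ones, so a naive head-to-unit mapping would leave some near-memory compute units idle while others saturate. Below is a minimal, hypothetical sketch of a greedy longest-processing-time assignment, used here only to illustrate the kind of balancing such a scheduler performs; the paper's actual scheduler, tiling, and mapping strategy are not reproduced, and the workload numbers are invented.

```python
# Hypothetical sketch: greedily assign attention-head workloads (e.g., the
# number of KV tiles each head must scan) to near-memory compute units so
# that the most heavily loaded unit stays as small as possible.
import heapq

def balance_heads(head_workloads, num_units):
    """Longest-processing-time-first assignment of head workloads to units.

    head_workloads: dict mapping head id -> estimated work (e.g., KV tiles).
    num_units: number of near-memory compute units.
    Returns: dict mapping unit id -> list of head ids assigned to it.
    """
    # Min-heap keyed by each unit's current total load.
    heap = [(0, unit, []) for unit in range(num_units)]
    heapq.heapify(heap)
    for head, work in sorted(head_workloads.items(),
                             key=lambda kv: kv[1], reverse=True):
        load, unit, heads = heapq.heappop(heap)
        heads.append(head)
        heapq.heappush(heap, (load + work, unit, heads))
    return {unit: heads for _, unit, heads in heap}

# Toy usage: dynamic-sparsity heads (4-7) scan many more KV tiles than
# static-sparsity heads (0-3), so they are spread across the units.
workloads = {0: 16, 1: 16, 2: 16, 3: 16, 4: 128, 5: 96, 6: 112, 7: 80}
print(balance_heads(workloads, num_units=4))
```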
Zizhuo Fu
Institute for Artificial Intelligence, School of Integrated Circuits, Peking University, Beijing, China
Xiaotian Guo
School of Integrated Circuits, Peking University, Beijing, China
Wenxuan Zeng
Peking University
Efficient Deep Learning · Large Language Model
Shuzhang Zhong
Peking University
Machine Learning System
Yadong Zhang
Nano Core Chip Electronic Technology, Hangzhou, China
Peiyu Chen
Nano Core Chip Electronic Technology, Hangzhou, China
Runsheng Wang
School of Integrated Circuits, Peking University, Beijing, China
Le Ye
Advanced Institute of Information Technology of Peking University, Hangzhou, China
Meng Li
Institute for Artificial Intelligence, School of Integrated Circuits, Peking University, Beijing, China