A3D-MoE: Acceleration of Large Language Models with Mixture of Experts via 3D Heterogeneous Integration

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low hardware utilization, high HBM bandwidth pressure, and elevated latency in MoE large language model inference, this paper proposes a software-hardware co-designed acceleration system based on 3D heterogeneous integration. Its core contributions are: (1) a 3D adaptive systolic array with a unified 3D dataflow, enabling dynamic load balancing between GEMV and GEMM operations; (2) a fused scheduling mechanism for attention and MoE computation, minimizing memory access and scheduling overhead; and (3) an odd-even expert placement strategy combined with V-Cache-based data reuse to reduce HBM access frequency. Experimental results demonstrate that, compared to the state-of-the-art, the system achieves 1.8–2× lower latency, 2–4× lower energy consumption, and 1.44–1.8× higher throughput—significantly improving both the efficiency and performance of MoE model inference.
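To make the first contribution concrete: in MoE inference, a single decode token routed to a few experts turns each expert's FFN into matrix-vector products (GEMV), while a batched prefill of many tokens yields matrix-matrix products (GEMM), so the GEMV-GEMM mix shifts with the workload. The sketch below is our own minimal illustration of this effect (all names, shapes, and the gating scheme are ours, not from the paper):

```python
import numpy as np

# Hypothetical top-k MoE feed-forward layer; dimensions chosen for illustration.
d_model, d_ff, n_experts, top_k = 64, 256, 8, 2
rng = np.random.default_rng(0)
gate = rng.standard_normal((d_model, n_experts))          # gating projection
experts = [(rng.standard_normal((d_model, d_ff)),
            rng.standard_normal((d_ff, d_model))) for _ in range(n_experts)]

def moe_ffn(tokens):
    """Route each token to its top-k experts and combine the expert outputs."""
    scores = tokens @ gate                                 # gating logits
    out = np.zeros_like(tokens)
    for t in range(tokens.shape[0]):
        x, s = tokens[t], scores[t]
        chosen = np.argsort(s)[-top_k:]                    # top-k expert indices
        w = np.exp(s[chosen]); w /= w.sum()                # softmax over chosen
        for weight, e in zip(w, chosen):
            w1, w2 = experts[e]
            out[t] += weight * (np.maximum(x @ w1, 0.0) @ w2)  # per-token GEMV
    return out

decode_out = moe_ffn(rng.standard_normal((1, d_model)))    # GEMV-dominated step
prefill_out = moe_ffn(rng.standard_normal((128, d_model))) # GEMM-dominated step
```

A fixed-dataflow accelerator tuned for one of these regimes sits idle in the other, which is the utilization gap the paper's 3D-adaptive systolic array targets.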

📝 Abstract
Conventional large language models (LLMs) carry dozens of GB to TB of model parameters, making inference highly energy-intensive and costly, as all the weights must be loaded onto onboard processing elements during computation. Recently, the Mixture-of-Experts (MoE) architecture has emerged as an efficient alternative, promising inference with fewer activated weights per token. Nevertheless, fine-grained MoE-based LLMs face several challenges: 1) variable workloads at runtime create arbitrary GEMV-GEMM ratios that reduce hardware utilization; 2) traditional MoE-based scheduling for LLM serving cannot fuse attention operations with MoE operations, leading to increased latency and decreased hardware utilization; and 3) despite being more efficient than conventional LLMs, loading experts from DRAM still consumes significant energy and requires substantial DRAM bandwidth. Addressing these challenges, we propose: 1) A3D-MoE, a 3D heterogeneous integration system that employs state-of-the-art vertical integration technology to significantly enhance memory bandwidth while reducing Network-on-Chip (NoC) overhead and energy consumption; 2) a 3D-adaptive GEMV-GEMM-ratio systolic array with V-Cache efficient data reuse and a novel unified 3D dataflow that solves the reduced hardware utilization caused by arbitrary GEMV-GEMM ratios across workloads; 3) a hardware resource-aware operation-fusion scheduler that fuses attention operations with MoE operations to enhance hardware performance; and 4) MoE score-aware HBM access reduction with even-odd expert placement that lowers DRAM access and bandwidth requirements. Our evaluation indicates that A3D-MoE delivers significant performance gains, reducing latency by 1.8x to 2x and energy consumption by 2x to 4x, while improving throughput by 1.44x to 1.8x compared to the state-of-the-art.
Problem

Research questions and friction points this paper is trying to address.

Reducing energy and cost in large language model inference
Improving hardware utilization with variable GEMV-GEMM ratios
Minimizing DRAM access and bandwidth for MoE-based LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D Heterogeneous Integration for enhanced bandwidth
3D-Adaptive GEMV-GEMM-ratio systolic array
Hardware-aware fusion of attention and MoE operations
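The third problem bullet, reducing DRAM access for MoE models, comes down to avoiding repeated expert-weight loads from HBM. The paper's score-aware HBM access reduction with even-odd placement is not detailed on this page, so the following is only a generic cache-style sketch of the idea of keeping recently used experts resident on-chip (the eviction policy, sizes, and counters are our assumptions):

```python
from collections import Counter
import numpy as np

# Hypothetical simulation: count how many expert fetches would hit HBM
# versus an on-chip cache of expert weights, over a simulated token stream.
n_experts, cache_slots, top_k, n_tokens = 16, 4, 2, 1000
rng = np.random.default_rng(1)

usage = Counter()      # how often each expert has been routed to
cached = set()         # experts currently resident on-chip
hbm_fetches = 0

for _ in range(n_tokens):
    scores = rng.standard_normal(n_experts)        # stand-in routing scores
    chosen = np.argsort(scores)[-top_k:]           # top-k routed experts
    for e in map(int, chosen):
        usage[e] += 1
        if e not in cached:
            hbm_fetches += 1                       # miss: load weights from HBM
            if len(cached) >= cache_slots:         # evict least-used expert
                cached.remove(min(cached, key=lambda c: usage[c]))
            cached.add(e)

hit_rate = 1.0 - hbm_fetches / (n_tokens * top_k)
```

With skewed routing scores (as real gating networks produce), the hit rate rises well above the uniform-random baseline, which is the regime where score-aware placement pays off.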
Wei-Hsing Huang
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA
Janak Sharda
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA
Cheng-Jhih Shih
School of Computer Science, Georgia Institute of Technology, Atlanta, GA 30332 USA
Yuyao Kong
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA
Faaiq Waqar
Ph.D. Student at the Georgia Institute of Technology
Integrated Circuits · Amorphous Oxide Transistors · AI Hardware · Nanotechnology · Nanoporous Materials
Pin-Jun Chen
School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA
Yingyan (Celine) Lin
Associate Professor, Georgia Institute of Technology
Efficient AI Algorithms · Deep Learning Accelerators · Green AI
Shimeng Yu
Georgia Institute of Technology, Dean's Professor
Non-volatile Memory · RRAM · Ferroelectric Memories · In-Memory Computing · AI Hardware