Hybrid Systolic Array Accelerator with Optimized Dataflow for Edge Large Language Model Inference

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Efficient large language model (LLM) inference on edge devices demands simultaneous optimization of prefill-phase energy efficiency and decode-phase area efficiency, while minimizing external memory accesses (EMA). This work proposes a hybrid systolic array architecture that jointly optimizes dataflow, computation, and storage for both the prefill and decode phases. Key innovations include MXINT4 weight quantization, a customized dataflow, and hardware-accelerated RMSNorm and RoPE units, enabling near-zero dequantization overhead and 100% hardware utilization, thereby significantly reducing EMA. Evaluated on a 1.3B-parameter model, the design achieves 247 token/s/mm² in the long-input prefill scenario and 117 token/s/mm² in the long-output decode scenario, outperforming state-of-the-art accelerators by more than 2.45× and 13.5×, respectively, while maintaining superior energy efficiency in token generation.

📝 Abstract
Edge inference for large language models (LLMs) offers secure, low-latency, and cost-effective inference solutions. We emphasize that an edge accelerator should achieve high area efficiency and minimize external memory access (EMA) during the memory-bound decode stage, while maintaining high energy efficiency during the compute-intensive prefill stage. This paper proposes an edge LLM inference accelerator featuring a hybrid systolic array (HSA) architecture that optimizes inference efficiency in both stages. To further reduce EMA, we adopt MXINT4 weight quantization and propose an optimized dataflow tailored for the HSA, ensuring negligible dequantization overhead and achieving 100% hardware utilization with minimal accuracy loss under edge DRAM bandwidth constraints. For non-linear operations, we incorporate optimized root mean square normalization (RMSNorm) and rotary position embedding (RoPE) units, reducing their latency, area, and memory access overhead while enabling end-to-end inference on our accelerator. Our solution achieves 247/117 token/s/mm² while running a 1.3B LLM in long-input/long-output scenarios, providing >2.45×/13.5× improvement over existing approaches, while maintaining superior energy efficiency in token generation.
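The MXINT4 format the abstract relies on can be illustrated with a small sketch: a block of weights shares one power-of-two exponent, and each element stores only a 4-bit signed integer mantissa, which is why dequantization reduces to a single shift/multiply per block. The block size and rounding policy below are illustrative assumptions, not the paper's exact hardware parameters.

```python
import math

def mxint4_quantize(block):
    """Quantize a block of floats to an MXINT4-style format:
    one shared power-of-two scale per block plus 4-bit signed
    integer mantissas in [-8, 7]. The block length is the caller's
    choice (illustrative; microscaling formats commonly use 32)."""
    amax = max(abs(v) for v in block)
    if amax == 0.0:
        return 0, [0] * len(block)
    # Smallest power-of-two scale that maps the largest magnitude into [-8, 7].
    shared_exp = math.ceil(math.log2(amax / 7))
    scale = 2.0 ** shared_exp
    mantissas = [max(-8, min(7, round(v / scale))) for v in block]
    return shared_exp, mantissas

def mxint4_dequantize(shared_exp, mantissas):
    """One multiply by a power of two per element -- the source of the
    near-zero dequantization overhead claimed for the datapath."""
    scale = 2.0 ** shared_exp
    return [m * scale for m in mantissas]
```

Because the shared scale is a power of two, hardware can apply it with an exponent add rather than a full multiplier, which is what lets dequantization hide inside the systolic array's datapath.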
Problem

Research questions and friction points this paper is trying to address.

Optimize edge LLM inference efficiency for prefill and decode stages
Reduce external memory access with MXINT4 quantization and HSA dataflow
Minimize latency and area for non-linear operations like RMSNorm and RoPE
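The RMSNorm operation targeted by the dedicated hardware unit follows the standard definition: scale each channel by the reciprocal of the vector's root mean square, then apply a learned per-channel gain. A minimal reference sketch (software semantics only, not the paper's hardware implementation; `eps` is a conventional stabilizer):

```python
import math

def rmsnorm(x, gamma, eps=1e-6):
    """RMSNorm: divide x by its root mean square, then apply the
    learned per-channel gain gamma. Unlike LayerNorm, no mean is
    subtracted, which removes one reduction pass in hardware."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [g * v / rms for g, v in zip(gamma, x)]
```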
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid systolic array optimizes edge LLM inference
MXINT4 quantization reduces external memory access
Optimized RMSNorm and RoPE units enhance efficiency
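RoPE, the other non-linear operation given a dedicated unit, rotates pairs of query/key dimensions by position-dependent angles, so a hardware unit needs only sin/cos generation and a 2×2 rotation per pair. The sketch below uses the common base-10000 frequency schedule and the adjacent (even, odd) pairing convention; both are assumptions, since the paper's exact configuration is not given here.

```python
import math

def rope(x, pos, base=10000.0):
    """Rotary position embedding: rotate each consecutive (even, odd)
    pair of x by an angle proportional to the token position, with a
    per-pair frequency that decays geometrically across dimensions."""
    d = len(x)
    out = list(x)
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        out[i] = x[i] * c - x[i + 1] * s
        out[i + 1] = x[i] * s + x[i + 1] * c
    return out
```

Each pairwise rotation preserves the vector's norm, so the unit changes only the phase information that attention scores depend on, not the magnitudes.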