EdgeMM: Multi-Core CPU with Heterogeneous AI-Extension and Activation-aware Weight Pruning for Multimodal LLMs at Edge

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the dual bottlenecks of compute-intensive encoder operations and memory-bound decoder execution in edge-deployed multimodal large language models (MLLMs), this work proposes a multi-core CPU architecture with heterogeneous AI extensions. It pairs a compute-centric systolic array with a memory-centric digital compute-in-memory (CIM) co-processor, and introduces dynamic activation-aware weight pruning together with a bandwidth-coordinated scheduling mechanism. Key contributions include: (1) a heterogeneous core co-execution architecture tailored to the MLLM's dual bottlenecks; (2) a hardware-friendly real-time sparse weight mapping method; and (3) an activation-driven, bandwidth-adaptive scheduling strategy. Fabricated in a commercial 22 nm CMOS process, the prototype achieves a 2.84× speedup over a laptop RTX 3060 GPU on representative MLLM inference workloads, improving both edge inference throughput and energy efficiency.

📝 Abstract
Emerging multimodal LLMs (MLLMs) exhibit strong cross-modality perception and reasoning capabilities and hold great potential for various applications at the edge. However, MLLMs typically consist of a compute-intensive modality encoder and a memory-bound LLM decoder, leading to distinct bottlenecks for hardware designs. In this work, we present a multi-core CPU solution with heterogeneous AI extensions, which are based on either compute-centric systolic arrays or memory-centric digital compute-in-memory (CIM) co-processors. In addition, dynamic activation-aware weight pruning and bandwidth management are developed to enhance bandwidth efficiency and core utilization, improving overall performance. We implemented our solution using commercial 22nm technology. For representative MLLMs, our evaluations show EdgeMM can achieve a 2.84x performance speedup compared to a laptop RTX 3060 GPU.
Problem

Research questions and friction points this paper is trying to address.

Addressing compute and memory bottlenecks in multimodal LLMs at the edge
Enhancing bandwidth efficiency with activation-aware weight pruning
Improving performance via heterogeneous AI-extended multi-core CPU design
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-core CPU with heterogeneous AI extensions
Dynamic activation-aware weight pruning technique
Bandwidth management for core utilization
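The exact pruning criterion is not spelled out in this summary; as a rough illustration only, activation-aware pruning schemes typically score each weight by its magnitude scaled by the observed activation magnitude of its input channel, then zero the lowest-scoring entries. The function name and scoring rule below are assumptions, not the paper's method:

```python
import numpy as np

def activation_aware_prune(W, act_norms, sparsity=0.5):
    """Sketch of activation-aware weight pruning.

    W         : (out_features, in_features) weight matrix
    act_norms : (in_features,) per-channel activation magnitudes observed at runtime
    sparsity  : fraction of weights to zero out
    """
    # Importance score: |weight| scaled by how strongly its input channel fires.
    scores = np.abs(W) * act_norms[None, :]
    k = int(W.size * sparsity)
    # Threshold at the k-th smallest score; everything at or below it is pruned.
    thresh = np.partition(scores.ravel(), k - 1)[k - 1]
    mask = scores > thresh
    return W * mask, mask
```

Pruned weights can then be skipped by the CIM co-processor via the sparse weight mapping, reducing both memory traffic and compute during the memory-bound decoder phase.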
Kangbo Bai
School of Integrated Circuits, Peking University, Beijing, China
Le Ye
School of Integrated Circuits, Peking University, Beijing, China
Ru Huang
School of Integrated Circuits, Peking University, Beijing, China
Tianyu Jia
Assistant Professor, Peking University
VLSI Design · Computer Architecture