🤖 AI Summary
To address the dual bottlenecks of compute-intensive encoder operations and memory-bound decoder execution in edge-deployed multimodal large language models (MLLMs), this work proposes EdgeMM, a heterogeneous AI-accelerated multi-core CPU architecture. It pairs a compute-optimized systolic array with a digital compute-in-memory (CIM) co-processor, and introduces the first activation-aware dynamic weight pruning and bandwidth-coordinated scheduling mechanism. Key contributions include: (1) a heterogeneous core co-execution architecture tailored to MLLMs' dual bottlenecks; (2) a hardware-friendly real-time sparse weight mapping method; and (3) an activation-driven, bandwidth-adaptive scheduling strategy. Fabricated in a commercial 22 nm CMOS technology, the prototype achieves a 2.84× speedup over a laptop RTX 3060 GPU on representative MLLM inference workloads, significantly improving both edge inference throughput and energy efficiency.
📝 Abstract
Emerging multimodal LLMs (MLLMs) exhibit strong cross-modal perception and reasoning capabilities and hold great potential for applications at the edge. However, MLLMs typically pair a compute-intensive modality encoder with a memory-bound LLM decoder, posing distinct bottlenecks for hardware design. In this work, we present EdgeMM, a multi-core CPU solution with heterogeneous AI extensions based on either compute-centric systolic arrays or memory-centric digital compute-in-memory (CIM) co-processors. In addition, dynamic activation-aware weight pruning and bandwidth management are developed to enhance bandwidth efficiency and core utilization, improving overall performance. We implemented our solution in a commercial 22 nm technology. On representative MLLMs, our evaluations show EdgeMM achieves a 2.84× speedup over a laptop RTX 3060 GPU.
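To make the activation-aware pruning idea concrete, the sketch below scores each weight by its magnitude scaled with the norm of the activations feeding it, then zeros the lowest-scoring weights. This is a generic software illustration (a Wanda-style heuristic with assumed function and parameter names); the paper's on-chip real-time sparse weight mapping is hardware-specific and not reproduced here.

```python
import numpy as np

def activation_aware_prune(W, X, sparsity=0.5):
    """Zero out the weights deemed least important under the current
    activations. Importance = |W| scaled by the per-input-channel
    activation norm (illustrative heuristic, not the paper's circuit)."""
    # Per-input-channel activation magnitude, shape (in_features,)
    act_norm = np.linalg.norm(X, axis=0)
    # Broadcast the channel norms across output rows of W
    importance = np.abs(W) * act_norm
    k = int(W.size * sparsity)
    # Threshold at the k-th smallest importance score
    thresh = np.partition(importance.ravel(), k)[k]
    mask = importance >= thresh
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))   # (out_features, in_features)
X = rng.normal(size=(4, 16))   # a batch of activations
W_sparse, mask = activation_aware_prune(W, X, sparsity=0.5)
```

Because the mask depends on the current activations, it can be recomputed per input batch, which is the "dynamic" aspect the abstract refers to: different inputs keep different weight subsets live.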