Characterizing and Optimizing Realistic Workloads on a Commercial Compute-in-SRAM Device

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior evaluations of Compute-in-Memory (CiM) SRAM architectures rely heavily on simulation or small-scale prototypes, failing to capture their real-world potential. Method: This work presents the first end-to-end empirical evaluation of CiM on commercial GSI APU hardware, introducing three optimizations—communication-aware reduction mapping, DMA transfer coalescing, and broadcast-friendly data layouts—to improve data-movement efficiency; it further develops a hybrid analytical model combining measurement-driven characterization with HBM bandwidth simulation to guide system-level optimization. Results: On 10–200 GB RAG retrieval workloads, the optimized CiM system achieves a 4.8–6.6× retrieval speedup and 1.1–1.8× lower end-to-end latency over an optimized CPU baseline, while delivering 54.4–117.9× higher energy efficiency than an NVIDIA A6000 GPU. This study establishes a reproducible evaluation framework and optimization paradigm for practical CiM deployment.
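The hybrid analytical model described above combines on-device measurements with simulated HBM bandwidth. A minimal roofline-style sketch of that idea is below; the function name and all numeric defaults are illustrative placeholders, not figures or APIs from the paper:

```python
def retrieval_latency_s(bytes_moved: float, ops: float,
                        hbm_bw_gbps: float = 400.0,
                        device_gops: float = 100.0) -> float:
    """Estimate workload latency as the max of data-movement time and
    compute time, assuming perfect overlap (a roofline-style model).

    hbm_bw_gbps models the shared off-chip HBM bandwidth; device_gops
    stands in for a measured on-device compute rate. Both are
    hypothetical defaults for illustration only.
    """
    transfer_s = bytes_moved / (hbm_bw_gbps * 1e9)  # time bound by bandwidth
    compute_s = ops / (device_gops * 1e9)           # time bound by compute
    return max(transfer_s, compute_s)
```

A model of this shape makes the bottleneck explicit: when `transfer_s` dominates, data-layout and DMA optimizations pay off; when `compute_s` dominates, they do not.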

📝 Abstract
Compute-in-SRAM architectures offer a promising approach to achieving higher performance and energy efficiency across a range of data-intensive applications. However, prior evaluations have largely relied on simulators or small prototypes, limiting the understanding of their real-world potential. In this work, we present a comprehensive performance and energy characterization of a commercial compute-in-SRAM device, the GSI APU, under realistic workloads. We compare the GSI APU against established architectures, including CPUs and GPUs, to quantify its energy efficiency and performance potential. We introduce an analytical framework for general-purpose compute-in-SRAM devices that reveals fundamental optimization principles by modeling performance trade-offs, thereby guiding program optimizations. Exploiting the fine-grained parallelism of tightly integrated memory-compute architectures requires careful data management. We address this by proposing three optimizations: communication-aware reduction mapping, coalesced DMA, and broadcast-friendly data layouts. When applied to retrieval-augmented generation (RAG) over large corpora (10GB–200GB), these optimizations enable our compute-in-SRAM system to accelerate retrieval by 4.8×–6.6× over an optimized CPU baseline, improving end-to-end RAG latency by 1.1×–1.8×. The shared off-chip memory bandwidth is modeled using a simulated HBM, while all other components are measured on the real compute-in-SRAM device. Critically, this system matches the performance of an NVIDIA A6000 GPU for RAG while being significantly more energy-efficient (54.4×–117.9× reduction). These findings validate the viability of compute-in-SRAM for complex, real-world applications and provide guidance for advancing the technology.
Problem

Research questions and friction points this paper is trying to address.

Characterizing the performance and energy of a commercial compute-in-SRAM device
Optimizing data management for tightly integrated memory-compute architectures
Accelerating retrieval-augmented generation workloads efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analytical framework for performance trade-offs modeling
Communication-aware reduction mapping optimization technique
Coalesced DMA and broadcast-friendly data layouts
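Of the listed optimizations, coalesced DMA has the most self-contained intuition: many small transfer requests are merged into fewer, larger ones so that per-transfer setup overhead is amortized. A hedged sketch of the merging logic, using a hypothetical `(offset, length)` request representation (real DMA engines add alignment and maximum-size constraints the paper's implementation would have to respect):

```python
def coalesce_dma(transfers, max_gap=0):
    """Merge adjacent or overlapping (offset, length) DMA requests.

    Requests are sorted by offset; a request is folded into the previous
    merged transfer when it starts within `max_gap` bytes of its end.
    Illustrative sketch only, not the paper's actual implementation.
    """
    merged = []
    for off, length in sorted(transfers):
        if merged:
            last_off, last_len = merged[-1]
            if off <= last_off + last_len + max_gap:
                # Extend the previous transfer to cover this request.
                new_end = max(last_off + last_len, off + length)
                merged[-1] = (last_off, new_end - last_off)
                continue
        merged.append((off, length))
    return merged
```

For example, two back-to-back 64-byte requests collapse into a single 128-byte transfer, while a request far away stays separate; a nonzero `max_gap` trades a little redundant data movement for fewer transfers.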