ALPHA-PIM: Analysis of Linear Algebraic Processing for High-Performance Graph Applications on a Real Processing-In-Memory System

📅 2025-10-12
🏛️ IEEE International Symposium on Workload Characterization
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the memory-wall bottleneck in large-scale graph processing, which stems from frequent data movement between processors and memory. It presents the first systematic evaluation of representative graph algorithms on a commercial Processing-in-Memory (PIM) platform, UPMEM, using a linear-algebraic formulation to guide optimizations. By combining DMA-based data transfers, multi-core parallelism, and tailored data-partitioning strategies, the approach substantially reduces data-movement overhead. Experimental comparisons against CPU and GPU baselines uncover critical limitations of current PIM architectures in computation throughput, communication efficiency, and memory-subsystem design. The study also identifies key optimization directions, including greater instruction-level parallelism, non-blocking DMA operations, and direct interconnects among PIM cores, offering insights for future PIM architecture development.
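The linear-algebraic formulation mentioned above casts graph traversal as sparse linear algebra. As an illustrative sketch (not the paper's code), level-synchronous BFS can be expressed as repeated matrix-vector products over a boolean semiring; the `bfs_spmv` helper below is a hypothetical NumPy example of that idea:

```python
import numpy as np

def bfs_spmv(adj, source):
    """Level-synchronous BFS as repeated sparse matrix-vector
    products over a boolean semiring (illustrative sketch)."""
    n = adj.shape[0]
    levels = np.full(n, -1, dtype=int)   # -1 = unvisited
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    levels[source] = 0
    level = 0
    while frontier.any():
        level += 1
        # "Multiply": vertices reachable from the current frontier
        reached = (adj.T @ frontier.astype(int)) > 0
        # Mask out already-visited vertices to form the next frontier
        frontier = reached & (levels == -1)
        levels[frontier] = level
    return levels

# Tiny toy graph (hypothetical): edges 0->1, 0->2, 1->2
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [0, 0, 0]])
levels = bfs_spmv(adj, 0)
```

Framing traversal this way is what lets the same SpMV-style kernel, and its partitioning, be reused across different graph algorithms.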

📝 Abstract
Processing large-scale graph datasets is computationally intensive and time-consuming. Processor-centric CPU and GPU architectures, commonly used for graph applications, often face bottlenecks caused by extensive data movement between the processor and memory units due to low data reuse. As a result, these applications are often memory-bound, limiting both performance and energy efficiency due to excessive data transfers. Processing-In-Memory (PIM) offers a promising approach to mitigate data movement bottlenecks by integrating computation directly within or near memory. Although several previous studies have introduced custom PIM proposals for graph processing, they do not leverage real-world PIM systems.

This work aims to explore the capabilities and characteristics of common graph algorithms on a real-world PIM system to accelerate data-intensive graph workloads. To this end, we (1) implement representative graph algorithms on UPMEM’s general-purpose PIM architecture; (2) characterize their performance and identify key bottlenecks; (3) compare results against CPU and GPU baselines; and (4) derive insights to guide future PIM hardware design.

Our study underscores the importance of selecting optimal data partitioning strategies across PIM cores to maximize performance. Additionally, we identify critical hardware limitations in current PIM architectures and emphasize the need for future enhancements across computation, memory, and communication subsystems. Key opportunities for improvement include increasing instruction-level parallelism, developing improved DMA engines with non-blocking capabilities, and enabling direct interconnection networks among PIM cores to reduce data transfer overheads.
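The abstract's point about data-partitioning strategies across PIM cores can be made concrete with a small sketch. One common strategy, shown here as an illustrative example rather than the paper's actual implementation, splits the rows of a CSR-format graph across PIM cores (UPMEM DPUs) so that each core receives a roughly equal number of nonzeros (edges) rather than an equal number of rows, which balances work on skewed degree distributions. The `partition_rows` helper is hypothetical:

```python
def partition_rows(row_ptr, n_dpus):
    """Split CSR rows into n_dpus contiguous chunks with roughly
    equal nonzero (edge) counts. Returns chunk boundary rows:
    DPU d processes rows [bounds[d], bounds[d+1]).
    Hypothetical helper illustrating load-balanced partitioning."""
    nnz = row_ptr[-1]
    target = nnz / n_dpus
    bounds = [0]
    for d in range(1, n_dpus):
        # Advance to the first row whose prefix edge count
        # reaches d * target
        goal = d * target
        r = bounds[-1]
        while r < len(row_ptr) - 1 and row_ptr[r] < goal:
            r += 1
        bounds.append(r)
    bounds.append(len(row_ptr) - 1)
    return bounds

# Toy CSR row pointer (hypothetical): row 0 has 4 edges,
# row 1 has 0, row 2 has 4 -- equal-row splits would be unbalanced
bounds = partition_rows([0, 4, 4, 8], 2)
```

With an equal-row split, one DPU would get 8 edges and the other 0; the nonzero-balanced split assigns 4 edges to each.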
Problem

Research questions and friction points this paper is trying to address.

graph processing
memory bottleneck
data movement
Processing-In-Memory
performance optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Processing-In-Memory
Graph Processing
UPMEM PIM
Data Partitioning
Memory-Bound Optimization