XDMA: A Distributed, Extensible DMA Architecture for Layout-Flexible Data Movements in Heterogeneous Multi-Accelerator SoCs

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency of flexible data-layout migration in heterogeneous multi-accelerator SoCs, where conventional DMA controllers support neither non-contiguous memory access nor real-time format conversion, this paper proposes XDMA, a distributed, scalable DMA architecture. Methodologically, XDMA replaces software loop control with hardware streaming address generation, decoupling distributed control from the pipelined data flow, and incorporates plug-and-play functional modules that enable in-transit data-layout transformation and lightweight computation. The design co-optimizes memory access patterns and on-chip interconnect protocols. Experimental results show that XDMA achieves up to a 151.2× improvement in link utilization on synthetic workloads and an average 2.3× speedup over state-of-the-art DMA controllers on real AI workloads. With an area overhead below 2% and power consumption of only 17% of the system total, XDMA delivers significant efficiency gains while preserving hardware scalability and flexibility.
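The paper does not publish its RTL, but the plug-and-play idea can be sketched at a high level: a minimal conceptual model (all names here are hypothetical, not from the paper) in which each in-flight word passes through a chain of plugin stages, so layout transformation or lightweight compute requires no separate memory pass.

```python
def streaming_dma(read_stream, plugins=()):
    """Conceptual sketch of in-transit data manipulation: yield each
    word from read_stream after piping it through the plugin chain.
    Plugins are plain callables standing in for the paper's
    plug-and-play hardware functional modules."""
    for word in read_stream:
        for stage in plugins:
            word = stage(word)
        yield word

# Example: widen 8-bit data and add a bias during the transfer itself,
# rather than in a second software loop after the copy completes.
moved = list(streaming_dma(iter([1, 2, 3]),
                           plugins=(lambda w: w << 8, lambda w: w + 1)))
# moved == [257, 513, 769]
```

Because the stages operate on the stream itself, the transformation costs no extra round trip through memory, which is the efficiency argument the summary makes for in-transit processing.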

📝 Abstract
As modern AI workloads increasingly rely on heterogeneous accelerators, ensuring high-bandwidth and layout-flexible data movements between accelerator memories has become a pressing challenge. Direct Memory Access (DMA) engines promise high bandwidth utilization for data movements but are typically optimal only for contiguous memory access, thus requiring additional software loops for data layout transformations. This, in turn, leads to excessive control overhead and underutilized on-chip interconnects. To overcome this inefficiency, we present XDMA, a distributed and extensible DMA architecture that enables layout-flexible data movements with high link utilization. We introduce three key innovations: (1) a data streaming engine as XDMA Frontend, replacing software address generators with hardware ones; (2) a distributed DMA architecture that maximizes link utilization and separates configuration from data transfer; (3) flexible plugins for XDMA enabling on-the-fly data manipulation during data transfers. XDMA demonstrates up to 151.2x/8.2x higher link utilization than software-based implementations in synthetic workloads and achieves 2.3x average speedup over accelerators with SoTA DMA in real-world applications. Our design incurs <2% area overhead over SoTA DMA solutions while consuming 17% of system power. XDMA proves that co-optimizing memory access, layout transformation, and interconnect protocols is key to unlocking heterogeneous multi-accelerator SoC performance.
Problem

Research questions and friction points this paper is trying to address.

Data movement between accelerator memories in heterogeneous multi-accelerator SoCs must be both high-bandwidth and layout-flexible
Conventional DMA engines are efficient only for contiguous access, forcing software loops for data layout transformation
Software address generation incurs excessive control overhead and leaves on-chip interconnects underutilized
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hardware address generators replace software loops
Distributed DMA separates config from data transfer
Flexible plugins enable on-the-fly data manipulation
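The first innovation above, replacing software loops with hardware address generators, can be illustrated with a rough conceptual model (the paper's actual generator design is not specified at this level of detail). A streaming address generator walks an N-dimensional strided pattern described by per-level trip counts and byte strides, emitting the flat, possibly non-contiguous address stream that the DMA datapath then consumes without per-element software control:

```python
def stream_addresses(base, bounds, strides):
    """Sketch of a streaming address generator: emit the flat address
    stream for an N-D strided access pattern. bounds[i] is the trip
    count and strides[i] the byte stride of loop level i (outermost
    first). These parameter names are illustrative, not the paper's."""
    idx = [0] * len(bounds)
    while True:
        yield base + sum(i * s for i, s in zip(idx, strides))
        # Increment like an odometer, innermost level first.
        for level in reversed(range(len(bounds))):
            idx[level] += 1
            if idx[level] < bounds[level]:
                break
            idx[level] = 0
        else:
            return  # every level wrapped: pattern exhausted

# Example: read a 2x3 tile out of a row-major 8-column matrix of
# 4-byte words; the resulting stream is non-contiguous in memory.
addrs = list(stream_addresses(base=0x1000, bounds=(2, 3), strides=(32, 4)))
# addrs == [0x1000, 0x1004, 0x1008, 0x1020, 0x1024, 0x1028]
```

In software, those two loop levels would be executed by the CPU per element; casting them as a small hardware state machine is what lets the DMA keep the link busy on non-contiguous patterns.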