🤖 AI Summary
To address the severe DRAM bandwidth bottleneck that limits multi-layer network processing efficiency in DNN accelerators, this paper proposes a cross-layer fused DRAM communication scheduling paradigm that goes beyond conventional single-layer dataflow scheduling. It introduces a tensor-centric notation to formally model the DRAM scheduling space and builds SoMa, an end-to-end compiler framework that jointly optimizes data prefetching and write-back timing, performs hardware-aware compilation, and adapts scheduling to LLM workloads. Leveraging search-based exploration of both the scheduling space and the hardware design space (DSE), SoMa automatically generates hardware-compatible, multi-layer fused schedules. Experiments show that SoMa achieves, on average, 2.11× higher performance and 37.3% lower energy cost than the state-of-the-art Cocco framework, and it has been integrated into the compiler of a commercial accelerator.
📝 Abstract
Modern Deep Neural Network (DNN) accelerators are equipped with increasingly large on-chip buffers, offering more opportunities to alleviate ever-growing DRAM bandwidth pressure. However, most existing research on buffer utilization still focuses primarily on single-layer dataflow scheduling optimization. As buffers grow large enough to hold most single-layer weights in most networks, the impact of single-layer dataflow optimization on DRAM communication diminishes significantly. New paradigms that fuse multiple layers to fully leverage the abundant on-chip buffer resources and reduce DRAM accesses have therefore become particularly important, yet remain an open challenge. To address this challenge, we first identify the optimization opportunities in DRAM communication scheduling by analyzing the drawbacks of existing layer-fusion approaches and recognizing the vast optimization potential in scheduling the timing of data prefetching from and write-back to DRAM. To fully exploit these opportunities, we develop a Tensor-centric Notation and a corresponding parsing method to represent different DRAM communication scheduling schemes and to characterize the overall DRAM communication scheduling space. Then, to explore this space thoroughly and efficiently for diverse accelerators and workloads, we develop an end-to-end scheduling framework, SoMa, which has already been developed into a compiler for our commercial accelerator product. Compared with the state-of-the-art (SOTA) Cocco framework, SoMa achieves, on average, a 2.11x performance improvement and a 37.3% reduction in energy cost simultaneously. Finally, we leverage SoMa to study optimizations for LLMs, perform design space exploration (DSE), and analyze the DRAM communication scheduling space through a practical example, yielding some…
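To make the layer-fusion argument concrete, the toy model below (not SoMa's actual algorithm; all layer sizes and the buffer capacity are illustrative assumptions) estimates DRAM traffic for layer-by-layer execution versus a fused schedule in which intermediate activations stay in the on-chip buffer when they fit, and spill to DRAM (one write plus one re-read) when they do not:

```python
# Toy DRAM-traffic model: layer-by-layer vs. fused execution.
# Each layer is described by hypothetical sizes (in arbitrary units):
# "in" = input activations, "w" = weights, "out" = output activations.

def traffic_unfused(layers):
    # Every layer streams its input, weights, and output through DRAM.
    return sum(l["in"] + l["w"] + l["out"] for l in layers)

def traffic_fused(layers, buf):
    # Only the group's input is read and the group's output written;
    # weights are always fetched from DRAM.
    total = layers[0]["in"] + sum(l["w"] for l in layers) + layers[-1]["out"]
    for a, b in zip(layers, layers[1:]):
        inter = a["out"]        # intermediate tensor between a and b
        if inter > buf:         # doesn't fit on chip: write back + re-read
            total += 2 * inter
    return total

layers = [
    {"in": 8, "w": 2, "out": 8},
    {"in": 8, "w": 2, "out": 4},
    {"in": 4, "w": 2, "out": 4},
]
print(traffic_unfused(layers))         # 42
print(traffic_fused(layers, buf=16))   # 18: all intermediates fit
print(traffic_fused(layers, buf=4))    # 34: the size-8 tensor spills
```

This captures the abstract's core point: once the buffer holds the intermediates, fusion removes their DRAM round trips entirely, which is why scheduling *when* tensors are prefetched and written back (rather than only optimizing each layer's dataflow) dominates the remaining traffic.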