🤖 AI Summary
To address the challenges of identifying performance bottlenecks, high analysis overhead, and unclear attribution of variability in large-scale GPU tracing data within heterogeneous HPC environments, this paper proposes the first end-to-end distributed analytical framework integrating causal graph modeling with parallel coordination graphs. The framework enables concurrent processing of multi-GPU traces and causal inference of performance variability through distributed data partitioning, pipelined parallel computation, scalable causal graph construction, and coordinated visualization. Its core innovation lies in introducing causal inference into GPU performance tracing analysis—enabling, for the first time, cross-trace execution dependency modeling and precise root-cause localization of bottlenecks. Experimental results demonstrate that, compared to baseline methods, the framework improves scalability by 67% in multi-trace independent analysis and significantly accelerates performance bottleneck identification and diagnosis.
📝 Abstract
Large-scale GPU traces play a critical role in identifying performance bottlenecks within heterogeneous High-Performance Computing (HPC) architectures. However, the sheer volume and complexity of a single trace of data make performance analysis both computationally expensive and time-consuming. To address this challenge, we present an end-to-end parallel performance analysis framework designed to handle multiple large-scale GPU traces efficiently. Our proposed framework partitions and processes trace data concurrently and employs causal graph methods and parallel coordinating chart to expose performance variability and dependencies across execution flows. Experimental results demonstrate a 67% improvement in terms of scalability, highlighting the effectiveness of our pipeline for analyzing multiple traces independently.