🤖 AI Summary
Existing tools struggle to provide fine-grained tracing of the UCX communication layer and are often confined to specific MPI implementations, hindering effective correlation between high-level MPI calls and low-level device communication behavior. To address this limitation, this work proposes ucTrace, the first general-purpose tool enabling fine-grained communication analysis at the UCX level. By jointly tracing process- and device-level events, ucTrace accurately maps communication operations across host, GPU, and NIC back to their originating MPI functions and offers interactive visualization. Crucially, ucTrace is independent of any particular MPI library and supports heterogeneous environments such as GPU-accelerated systems. It has been successfully applied to diverse HPC workloads, including point-to-point communication optimization, Allreduce performance comparison across multiple MPI libraries, communication profiling in linear solvers, NUMA binding evaluation, and large-scale GPU-accelerated GROMACS simulations.
📄 Abstract
UCX is a communication framework that enables low-latency, high-bandwidth communication in HPC systems. With its unified API, UCX facilitates efficient data transfers across multi-node CPU-GPU clusters. UCX is widely used as the transport layer for MPI, particularly in GPU-aware implementations. However, existing profiling tools lack fine-grained communication traces at the UCX level, do not capture transport-layer behavior, or are limited to specific MPI implementations.
To address these gaps, we introduce ucTrace, a novel profiler that exposes and visualizes UCX-driven communication in HPC environments. ucTrace provides insights into MPI workflows by profiling message passing at the UCX level, linking operations between hosts and devices (e.g., GPUs and NICs) directly to their originating MPI functions. Through interactive visualizations of process- and device-specific interactions, ucTrace helps system administrators, library developers, and application developers optimize performance and debug communication patterns in large-scale workloads. We demonstrate ucTrace's features through a wide range of experiments, including MPI point-to-point behavior under different UCX settings, Allreduce comparisons across MPI libraries, communication analysis of a linear solver, NUMA binding effects, and profiling of GROMACS MD simulations with GPU acceleration at scale. ucTrace is publicly available at https://github.com/ParCoreLab/ucTrace.