🤖 AI Summary
This work addresses three key challenges in real-time GPU performance monitoring at planetary scale: user privacy leakage, runtime overhead, and the difficulty of attributing data across massive deployments. We propose the first end-to-end privacy-preserving GPU performance profiling architecture that operates at planetary scale with zero performance overhead. Methodologically, we design a lightweight kernel-level monitoring agent, built upon NSYS/NCU, that integrates differential privacy, secure aggregation, and distributed sampling to enable application-agnostic, anonymized kernel-level behavior attribution and resource accounting. Evaluated on a simulated 100,000-GPU cluster, our system achieves complete telemetry collection and precise application-level attribution for realistic deep learning workloads (e.g., TorchBench) with no privacy leakage to third parties. The architecture delivers trustworthy, scalable hardware performance insights for chip design and systems optimization.
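To make the privacy pipeline concrete, here is a minimal sketch of how per-kernel timing reports could combine differential privacy with pairwise-mask secure aggregation. The function names, the Laplace mechanism, and the masking scheme are illustrative assumptions on my part; the summary names the techniques but not the exact mechanisms or parameters used.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_report(kernel_times_ms, epsilon, sensitivity, rng):
    """Add per-kernel Laplace noise so one client's report is epsilon-DP
    (assuming each client's contribution is clipped to `sensitivity`)."""
    scale = sensitivity / epsilon
    return {k: t + laplace_noise(scale, rng) for k, t in kernel_times_ms.items()}

def masked_reports(reports, rng):
    """Pairwise-mask secure aggregation: each pair of clients shares a random
    mask added by one and subtracted by the other, so masks cancel in the
    fleet-wide sum and the aggregator learns only totals, never one report."""
    masked = [dict(r) for r in reports]
    kernels = list(reports[0].keys())
    for i in range(len(reports)):
        for j in range(i + 1, len(reports)):
            for k in kernels:
                m = rng.uniform(-1e6, 1e6)
                masked[i][k] += m
                masked[j][k] -= m
    return masked

# Toy fleet of 5 clients reporting time spent in two kernels (ms).
rng = random.Random(0)
clients = [{"sgemm": 12.0, "softmax": 3.0} for _ in range(5)]
noisy = [dp_report(r, epsilon=1.0, sensitivity=1.0, rng=rng) for r in clients]
masked = masked_reports(noisy, rng)
total = {k: sum(m[k] for m in masked) for k in ["sgemm", "softmax"]}
```

The aggregator's `total` matches the sum of the noisy (but unmasked) reports up to floating-point error, while any individual masked report is statistically uninformative about that client's workload.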
📝 Abstract
GPUs are the dominant platform for many important applications today, including deep learning, accelerated computing, and scientific simulation. However, as the complexity of both applications and hardware increases, GPU chip manufacturers face a significant challenge: how to gather comprehensive performance characteristics and usage profiles from GPUs deployed in real-world scenarios. Such data, encompassing the types of kernels executed and the time spent in each, is crucial for optimizing chip design and enhancing application performance. Unfortunately, despite the availability of low-level tools like NSYS and NCU, current methodologies fall short, offering data collection only on an individual-user basis rather than at a broader, more informative fleet-wide scale. This paper takes on the problem of realizing a system that enables planet-scale, real-time profiling of low-level GPU hardware characteristics. We solve three fundamental problems: i) user experience: profiling imposes no slowdown; ii) user privacy: no third party learns which applications any user runs; iii) efficacy: we can collect data and attribute it to applications even across thousands of GPUs. Our results, simulating a 100,000-GPU deployment running applications from the TorchBench suite, show that our system addresses all three problems.
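The fleet-wide attribution claim rests on distributed sampling: if each device reports with some probability p, the aggregator can rescale observed totals to estimate per-kernel time across the whole fleet. The sketch below uses a Horvitz-Thompson-style 1/p rescaling over a simulated 100,000-GPU fleet; the sampling rate and estimator are my assumptions, not details given in the abstract.

```python
import random

def sampled_fleet_estimate(fleet_kernel_times, p, rng):
    """Each GPU reports its kernel time with probability p; the aggregator
    scales the observed total by 1/p to estimate the true fleet total
    (an unbiased Horvitz-Thompson estimator under uniform sampling)."""
    observed = 0.0
    for t in fleet_kernel_times:
        if rng.random() < p:
            observed += t
    return observed / p

# Simulated deployment: 100,000 GPUs, each spending ~10-11 ms in one kernel.
rng = random.Random(42)
fleet = [10.0 + rng.random() for _ in range(100_000)]
true_total = sum(fleet)
est = sampled_fleet_estimate(fleet, p=0.01, rng=rng)
```

With p = 0.01 only ~1,000 of 100,000 devices report, yet the estimate typically lands within a few percent of the true fleet total, which is the kind of overhead/coverage trade-off a planet-scale profiler needs.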