🤖 AI Summary
Modern computing systems offload tasks to hardware accelerators to improve energy efficiency; however, growing accelerator complexity increases the CPU overhead of configuring and synchronizing those accelerators, creating a performance-limiting “configuration wall” that particularly diminishes the practical benefits of high-performance accelerators. This work introduces the first extended Roofline model capable of quantifying configuration bottlenecks and develops an MLIR-based domain-specific compiler framework that eliminates and hides configuration overhead across architectures via compiler-driven abstraction, systematic optimization passes, and latency-hiding mechanisms. Key contributions include the first generalizable methodology for identifying the configuration wall and a root-cause modeling framework that enables automated mitigation of configuration-induced performance degradation. Evaluated on the open-source OpenGeMM system, the approach achieves a 2× geometric-mean performance improvement.
📝 Abstract
Contemporary compute platforms increasingly offload compute kernels from the CPU to integrated hardware accelerators to maximize performance per watt. Unfortunately, the time the CPU spends on setup, control, and synchronization has grown with accelerator complexity. For systems with complex accelerators, this means that performance can be configuration-bound. Faster accelerators are hit hardest by this overlooked performance drop, which we call the configuration wall. Prior work evidences this wall and proposes ad hoc solutions to reduce configuration overhead. However, these solutions are not universally applicable, nor do they offer comprehensive insight into the underlying causes of performance degradation. In this work, we first introduce a widely applicable variant of the well-known roofline model to quantify when system performance is configuration-bound. To move systems out of the configuration-bound region, we then propose a domain-specific compiler abstraction and associated optimization passes. We implement the abstraction and passes in the MLIR compiler framework and run the optimized binaries on open-source architectures to demonstrate their effectiveness and generality. Experiments show a geomean performance boost of 2× on the open-source OpenGeMM system, achieved by eliminating redundant configuration cycles and automatically hiding the remaining ones. Our work provides key insights into how accelerator performance is affected by setup mechanisms, thereby facilitating automatic code generation that circumvents the configuration wall.