🤖 AI Summary
Existing scheduling languages are tightly coupled to specific compiler ecosystems, hindering fair comparison, reproducibility, and reuse of cross-framework optimization strategies.
Method: This paper introduces XTC, a unified research platform built on a decoupled API that separates scheduling specification from backend implementation, cleanly isolating scheduling description, code generation, and performance measurement.
Contribution/Results: XTC establishes the first reproducible, cross-compiler (e.g., TVM, MLIR) benchmarking framework for AI operator performance, integrating a custom scheduling IR, multi-backend code generators, and an automated pipeline for collecting and normalizing performance data. Evaluated on representative operators, including CNN and Transformer layers, it enables standardized scheduling evaluation across frameworks. Experiments report a threefold gain in the efficiency of reproducing experiments, moving AI operator optimization toward standardized, verifiable research practice.
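To make the decoupling concrete, the sketch below illustrates the idea of separating a backend-agnostic schedule specification from per-compiler code generation and measurement. This is a hypothetical illustration, not XTC's actual API: the class and method names (`Schedule`, `Backend`, `lower`, `measure`, `benchmark`) are assumptions, and the backends are stubs rather than real TVM/MLIR bindings.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Schedule:
    """Backend-agnostic scheduling spec (hypothetical, for illustration)."""
    loop_tiles: Dict[str, int] = field(default_factory=dict)  # loop name -> tile size
    unroll: List[str] = field(default_factory=list)           # loops to fully unroll
    vectorize: List[str] = field(default_factory=list)        # loops to vectorize


class Backend:
    """Abstract code-generation and measurement backend."""
    name = "abstract"

    def lower(self, op: str, sched: Schedule) -> str:
        raise NotImplementedError

    def measure(self, artifact: str) -> float:
        raise NotImplementedError


class TVMBackend(Backend):
    name = "tvm"

    def lower(self, op, sched):
        # A real backend would drive TVM's schedule primitives here;
        # this stub just records the scheduling decisions.
        return f"tvm::{op}::tiles={sched.loop_tiles}"

    def measure(self, artifact):
        return 1.0  # placeholder latency in ms


class MLIRBackend(Backend):
    name = "mlir"

    def lower(self, op, sched):
        return f"mlir::{op}::tiles={sched.loop_tiles}"

    def measure(self, artifact):
        return 1.0  # placeholder latency in ms


def benchmark(op: str, sched: Schedule, backends: List[Backend]) -> Dict[str, float]:
    """Run one schedule spec through every backend and collect timings."""
    return {b.name: b.measure(b.lower(op, sched)) for b in backends}


sched = Schedule(loop_tiles={"i": 32, "j": 32}, unroll=["k"])
results = benchmark("matmul", sched, [TVMBackend(), MLIRBackend()])
```

The key design point mirrored here is that the `Schedule` object never mentions a compiler: the same specification can be lowered and timed by any backend, which is what enables the cross-framework comparison the paper targets.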
📝 Abstract
Achieving high efficiency on AI operators demands precise control over computation and data movement. However, existing scheduling languages are locked into specific compiler ecosystems, preventing fair comparison, reuse, and evaluation across frameworks. No unified interface currently decouples scheduling specification from code generation and measurement. We introduce XTC, a platform that unifies scheduling and performance evaluation across compilers. With its common API and reproducible measurement framework, XTC enables portable experimentation and accelerates research on optimization strategies.