🤖 AI Summary
The quantum computing benchmarking ecosystem suffers from fragmentation and poor interoperability across frameworks. Method: This paper proposes a platform-agnostic, modular benchmarking architecture that decouples problem generation, circuit execution, and result analysis into standardized, interchangeable components. It formally specifies uniform interfaces supporting over 20 benchmark variants and multiple circuit generation APIs (including Qiskit, Cirq, and CUDA-Q), while integrating pyGSTi for advanced characterization-level analysis. The architecture natively supports plug-and-play extension to emerging benchmarks such as dynamic circuits and quantum reinforcement learning. Building on multi-GPU simulation and adaptation to heterogeneous environments, it delivers a unified, scalable, cross-platform evaluation workflow. Contribution/Results: The architecture has been successfully deployed in the QED-C application-oriented benchmark suite, demonstrating seamless integration across complex benchmarks, diverse execution modes (e.g., simulation, hardware), and heterogeneous analysis toolchains.
📝 Abstract
We present a platform-agnostic modular architecture that addresses the increasingly fragmented landscape of quantum computing benchmarking by decoupling problem generation, circuit execution, and results analysis into independent, interoperable components. Supporting over 20 benchmark variants ranging from simple algorithmic tests like Bernstein-Vazirani to complex Hamiltonian simulation with observable calculations, the system integrates with multiple circuit generation APIs (Qiskit, CUDA-Q, Cirq) and enables diverse workflows. We validate the architecture through successful integration with Sandia's pyGSTi for advanced circuit analysis and CUDA-Q for multi-GPU HPC simulations. Extensibility of the system is demonstrated by implementing dynamic circuit variants of existing benchmarks and a new quantum reinforcement learning benchmark, which become readily available across multiple execution and analysis modes. Our primary contribution is identifying and formalizing modular interfaces that enable interoperability between incompatible benchmarking frameworks, demonstrating that standardized interfaces reduce ecosystem fragmentation while preserving optimization flexibility. This architecture has been developed as a key enhancement to the continually evolving QED-C Application-Oriented Performance Benchmarks for Quantum Computing suite.
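To make the decoupling described in the abstract concrete, the sketch below shows one way such modular interfaces could look in Python. All class and method names here are illustrative assumptions, not the actual QED-C or pyGSTi API: problem generation, circuit execution, and result analysis are independent components that any backend or analysis toolchain could implement and swap in.

```python
# Hypothetical sketch of decoupled benchmarking interfaces (illustrative
# names only, not the QED-C API). Each stage is an independent component
# behind a small uniform contract, so backends and analyzers are swappable.
from abc import ABC, abstractmethod


class ProblemGenerator(ABC):
    """Produces an abstract problem instance for a given size."""
    @abstractmethod
    def generate(self, num_qubits: int) -> dict: ...


class CircuitExecutor(ABC):
    """Runs a problem on some backend (simulator, hardware, ...)."""
    @abstractmethod
    def execute(self, problem: dict) -> dict: ...


class ResultAnalyzer(ABC):
    """Turns raw execution results into a benchmark metric."""
    @abstractmethod
    def analyze(self, problem: dict, result: dict) -> float: ...


class BernsteinVaziraniGenerator(ProblemGenerator):
    def generate(self, num_qubits: int) -> dict:
        # Hidden bitstring the benchmark circuit should recover.
        return {"name": "bernstein-vazirani", "secret": "1" * num_qubits}


class IdealSimulator(CircuitExecutor):
    def execute(self, problem: dict) -> dict:
        # Noiseless stand-in: BV recovers the secret deterministically.
        return {"counts": {problem["secret"]: 1000}}


class FidelityAnalyzer(ResultAnalyzer):
    def analyze(self, problem: dict, result: dict) -> float:
        counts = result["counts"]
        return counts.get(problem["secret"], 0) / sum(counts.values())


def run_benchmark(gen: ProblemGenerator, exe: CircuitExecutor,
                  ana: ResultAnalyzer, num_qubits: int) -> float:
    """Composable pipeline: each component can vary independently."""
    problem = gen.generate(num_qubits)
    result = exe.execute(problem)
    return ana.analyze(problem, result)


score = run_benchmark(BernsteinVaziraniGenerator(), IdealSimulator(),
                      FidelityAnalyzer(), num_qubits=4)
print(score)  # 1.0 on the ideal simulator
```

In this shape, a Cirq- or CUDA-Q-backed `CircuitExecutor`, or a pyGSTi-backed `ResultAnalyzer`, would slot into `run_benchmark` without touching problem generation, which is the interoperability property the paper formalizes.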