QAGT-MLP: An Attention-Based Graph Transformer for Small and Large-Scale Quantum Error Mitigation

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing quantum error mitigation techniques suffer from trade-offs among accuracy, efficiency, and scalability: traditional approaches (e.g., extrapolation, quasi-probability cancellation) incur high calibration overhead, while learning-based methods lack generalizability to large-scale, deep quantum circuits. To address this, we propose an attention-based graph transformer model that encodes quantum circuits as gate-level graphs. Our method introduces a novel dual-path attention mechanism that integrates global graph structure with local light-cone contextual information, enabling efficient error mitigation without additional sampling resources. The architecture combines graph neural networks, self-attention, dual-path feature fusion, and a lightweight MLP, improving both accuracy and robustness. Experiments on 100-qubit transverse-field Ising model (TFIM) circuits demonstrate superior mean error reduction and lower variance compared to state-of-the-art learning-based mitigators, validating the method's practical applicability to near-term noisy intermediate-scale quantum (NISQ) processors.

📝 Abstract
Noisy quantum devices demand error-mitigation techniques that are accurate yet simple and efficient in terms of shot count and processing time. Many established approaches (e.g., extrapolation and quasi-probability cancellation) impose substantial execution or calibration overheads, while existing learning-based methods have difficulty scaling to large, deep circuits. In this work, we introduce QAGT-MLP: an attention-based graph transformer tailored to small- and large-scale quantum error mitigation (QEM). QAGT-MLP encodes each quantum circuit as a graph whose nodes represent gate instances and whose edges capture qubit connectivity and causal adjacency. A dual-path attention module extracts features around the measured qubits at two scales: 1) graph-wide global structural context; and 2) fine-grained local light-cone context. These learned representations are concatenated with circuit-level descriptor features and the circuit's noisy expectation values, then passed to a lightweight MLP that predicts the noise-mitigated values. On large-scale, 100-qubit Trotterized 1D transverse-field Ising model (TFIM) circuits, QAGT-MLP outperformed state-of-the-art learning baselines in both mean error and error variability under matched shot budgets, demonstrating its validity and applicability in real-world QEM scenarios. By using attention to fuse global graph structure with local light-cone neighborhoods, QAGT-MLP achieves high mitigation quality without the noise-scaling or resource demands required by classical QEM pipelines, offering a scalable and practical path to QEM in current and future quantum workloads.
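The gate-level graph encoding described in the abstract (nodes for gate instances, edges for causal adjacency on shared qubits) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the gate tuple format and edge rule are assumptions.

```python
def circuit_to_graph(gates):
    """Encode a circuit as a gate-level graph.

    gates: time-ordered list of (name, qubits) tuples.
    Returns (nodes, edges): node i is gate i; a directed edge (i, j)
    links each gate to the next gate acting on any of the same qubits,
    capturing causal adjacency along each qubit wire.
    """
    last_on_qubit = {}  # qubit -> index of the most recent gate on it
    edges = set()
    for i, (_, qubits) in enumerate(gates):
        for q in qubits:
            if q in last_on_qubit:
                edges.add((last_on_qubit[q], i))
            last_on_qubit[q] = i
    return list(range(len(gates))), sorted(edges)

# Toy 3-qubit circuit: H on q0, CX(q0,q1), RZ on q1, CX(q1,q2)
circuit = [("h", (0,)), ("cx", (0, 1)), ("rz", (1,)), ("cx", (1, 2))]
nodes, edges = circuit_to_graph(circuit)
```

For the toy circuit, gates 0 and 1 share q0, gates 1 and 2 share q1, and gates 2 and 3 share q1, so the edge list is [(0, 1), (1, 2), (2, 3)].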
Problem

Research questions and friction points this paper is trying to address.

Mitigating quantum errors in both small and large-scale circuits efficiently
Overcoming execution overhead of classical methods and scalability limitations of learning approaches
Achieving high error mitigation quality without increasing resource demands
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention-based graph transformer for quantum error mitigation
Dual-path attention module extracts global and local contexts
Lightweight MLP predicts noise-mitigated values from learned representations
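The pipeline sketched above (dual-path context extraction, concatenation with circuit descriptors and the noisy expectation value, then a lightweight MLP head) can be illustrated with a minimal NumPy forward pass. This is a hedged sketch, not the paper's architecture: the softmax-pooling form of attention, all dimensions, and the choice of descriptors are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(node_feats, query):
    """Attention-weighted pooling of node features into one context vector."""
    scores = node_feats @ query            # one score per node
    return softmax(scores) @ node_feats    # weighted sum -> (d,)

d = 8
global_nodes = rng.normal(size=(20, d))    # embeddings for all gate nodes
lightcone_nodes = global_nodes[:5]         # nodes in the measured qubit's light cone
q_global = rng.normal(size=d)
q_local = rng.normal(size=d)

g_ctx = attention_pool(global_nodes, q_global)    # graph-wide global context
l_ctx = attention_pool(lightcone_nodes, q_local)  # fine-grained local context

descriptors = np.array([20.0, 3.0])    # e.g. gate count, depth (assumed features)
noisy_value = np.array([0.42])         # noisy expectation value from the device
x = np.concatenate([g_ctx, l_ctx, descriptors, noisy_value])

# Lightweight MLP head (random, untrained weights) predicting the mitigated value
W1, b1 = rng.normal(size=(16, x.size)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)
mitigated = (W2 @ np.maximum(W1 @ x + b1, 0) + b2)[0]
```

With untrained weights the output is arbitrary; the point is the shape of the computation: two attention-pooled contexts fused with scalar circuit features before a small MLP regresses the mitigated expectation value.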
Seyed Mohamad Ali Tousi
University of Missouri Columbia
Computer Vision · Artificial Intelligence · Optimization Algorithms
G. N. DeSouza
Vision-Guided and Intelligent Robotics Lab (ViGIR), University of Missouri, Columbia, US