Bridging quantum and classical computing for partial differential equations through multifidelity machine learning

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
On near-term quantum hardware, solving partial differential equations (PDEs) suffers from limited accuracy: insufficient qubits restrict spatial resolution, and shallow circuit depth prevents accurate temporal integration. Method: The paper proposes the first multifidelity machine learning framework tailored to quantum PDE solvers. It integrates abundant low-fidelity quantum simulations (e.g., from quantum lattice Boltzmann methods) with sparse high-fidelity classical data in a hierarchical neural network capable of both linear and nonlinear modeling, enabling cross-fidelity correction and temporal extrapolation. Contribution/Results: By introducing multifidelity learning into quantum PDE solving, the approach mitigates hardware-imposed accuracy bottlenecks. Validation on the viscous Burgers equation and the incompressible Navier–Stokes equations demonstrates that corrected solutions closely approximate high-fidelity classical simulations, substantially reducing reliance on costly full-scale classical computations.

📝 Abstract
Quantum algorithms for partial differential equations (PDEs) face severe practical constraints on near-term hardware: limited qubit counts restrict spatial resolution to coarse grids, while circuit depth limitations prevent accurate long-time integration. These hardware bottlenecks confine quantum PDE solvers to low-fidelity regimes despite their theoretical potential for computational speedup. We introduce a multifidelity learning framework that corrects coarse quantum solutions to high-fidelity accuracy using sparse classical training data, facilitating the path toward practical quantum utility for scientific computing. The approach trains a low-fidelity surrogate on abundant quantum solver outputs, then learns correction mappings through a multifidelity neural architecture that balances linear and nonlinear transformations. Demonstrated on benchmark nonlinear PDEs, including the viscous Burgers equation and incompressible Navier–Stokes flows solved via quantum lattice Boltzmann methods, the framework successfully corrects coarse quantum predictions and achieves temporal extrapolation well beyond the classical training window. This strategy illustrates how one can reduce expensive high-fidelity simulation requirements while producing predictions that are competitive with classical accuracy. By bridging the gap between hardware-limited quantum simulations and application requirements, this work establishes a pathway for extracting computational value from current quantum devices in real-world scientific applications, advancing both algorithm development and practical deployment of near-term quantum computing for computational physics.
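The abstract's core idea — learn a correction mapping y_H ≈ F(x, y_L) from abundant low-fidelity data and only sparse high-fidelity samples, combining a linear cross-fidelity term with nonlinear features — can be sketched in a toy form. This is not the authors' code: the "solvers" below are illustrative stand-ins, the feature basis is an assumption, and the fit uses plain least squares rather than the paper's hierarchical neural network.

```python
# Toy sketch of multifidelity correction (illustrative, not the paper's method):
# many cheap low-fidelity evaluations, few expensive high-fidelity samples.
import numpy as np

def low_fidelity(x):
    # Stand-in for a coarse quantum solver output (e.g., quantum LBM on a coarse grid).
    return np.sin(x)

def high_fidelity(x):
    # Stand-in for an expensive high-fidelity classical simulation.
    return np.sin(x) + 0.15 * np.sin(3 * x)

# Sparse high-fidelity training points; low-fidelity data is cheap everywhere.
x_hf = np.linspace(0.0, 2.0 * np.pi, 8)
y_hf = high_fidelity(x_hf)

def features(x, y_lf):
    # Linear cross-fidelity term plus simple nonlinear basis features (assumed).
    return np.column_stack([
        y_lf,
        np.sin(3 * x),
        np.cos(3 * x),
        np.ones_like(x),
    ])

# Least-squares fit of the correction mapping on the sparse high-fidelity data.
coef, *_ = np.linalg.lstsq(features(x_hf, low_fidelity(x_hf)), y_hf, rcond=None)

# Apply the learned correction on a dense grid where only low-fidelity data exists.
x_test = np.linspace(0.0, 2.0 * np.pi, 200)
y_corrected = features(x_test, low_fidelity(x_test)) @ coef
err = np.max(np.abs(y_corrected - high_fidelity(x_test)))
print(f"max corrected error: {err:.2e}")
```

In this toy setting the high-fidelity solution lies in the span of the chosen features, so eight sparse samples suffice to recover it almost exactly; the paper's neural architecture plays the analogous role when no such basis is known a priori.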
Problem

Research questions and friction points this paper is trying to address.

How to correct coarse quantum PDE solutions to high-fidelity accuracy using only sparse classical data
How to bridge hardware-limited quantum simulations with the accuracy requirements of real-world scientific applications
How to reduce reliance on expensive high-fidelity classical simulations while retaining classical-level accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multifidelity learning framework that corrects coarse quantum PDE solutions
Needs only sparse high-fidelity classical data to reach classical-level accuracy
Neural architecture that balances linear and nonlinear cross-fidelity transformations