TITAN: A Trajectory-Informed Technique for Adaptive Parameter Freezing in Large-Scale VQE

📅 2025-09-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large-scale variational quantum eigensolver (VQE) training faces two critical bottlenecks: high gradient evaluation overhead, which scales linearly with the number of parameters, and slow convergence due to barren plateaus, which cause exponential growth in measurement cost. To address these challenges, this work introduces TITAN, a framework that predicts parameter importance via trajectory-based analysis of training dynamics. TITAN combines barren-plateau-resilient data construction with an adaptive neural architecture to identify and freeze inactive parameters early in optimization. Crucially, it requires no additional quantum resources and generalizes across diverse ansätze. On systems up to 30 qubits, TITAN reduces circuit evaluations by 40–60% and accelerates convergence up to threefold, while matching or improving ground-state energy estimation accuracy. These advances enhance the scalability of VQE for quantum chemistry and materials simulation.

📝 Abstract
The variational quantum eigensolver (VQE) is a leading candidate for harnessing quantum computers to advance quantum chemistry and materials simulations, yet its training efficiency deteriorates rapidly for large Hamiltonians. Two issues underlie this bottleneck: (i) the no-cloning theorem imposes a linear growth in circuit evaluations with the number of parameters per gradient step; and (ii) deeper circuits encounter barren plateaus (BPs), leading to exponentially increasing measurement overheads. To address these challenges, we propose a deep learning framework, dubbed Titan, which identifies and freezes inactive parameters of a given ansatz at initialization for a specific class of Hamiltonians, reducing the optimization overhead without sacrificing accuracy. The motivation for Titan stems from our empirical finding that a subset of parameters consistently has a negligible influence on training dynamics. Its design combines a theoretically grounded data construction strategy, ensuring each training example is informative and BP-resilient, with an adaptive neural architecture that generalizes across ansätze of varying sizes. Across benchmark transverse-field Ising models, Heisenberg models, and multiple molecular systems of up to 30 qubits, Titan achieves up to 3 times faster convergence and 40% to 60% fewer circuit evaluations than state-of-the-art baselines, while matching or surpassing their estimation accuracy. By proactively trimming the parameter space, Titan lowers hardware demands and offers a scalable path toward utilizing VQE for practical quantum chemistry and materials science.
Problem

Research questions and friction points this paper is trying to address.

Reduces VQE training overhead for large Hamiltonians
Addresses barren plateaus and measurement overheads
Freezes inactive parameters without sacrificing accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive parameter freezing technique
Deep learning framework for VQE
BP-resilient data construction strategy
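The core idea of trajectory-informed freezing can be illustrated with a minimal heuristic sketch. Titan itself uses a trained neural network to predict parameter importance; the function below is a hypothetical stand-in that freezes parameters whose gradient trajectory shows negligible recent activity (the names `adaptive_freeze`, `window`, and `threshold` are illustrative, not from the paper).

```python
import numpy as np

def adaptive_freeze(grad_history, window=10, threshold=1e-3):
    """Return a boolean mask of parameters to keep active.

    grad_history: (steps, n_params) array of per-step gradients.
    A parameter is frozen when its mean |gradient| over the last
    `window` steps falls below `threshold`. This is a simple
    heuristic proxy; Titan instead learns this decision from
    BP-resilient training data.
    """
    recent = np.abs(grad_history[-window:])
    activity = recent.mean(axis=0)  # per-parameter trajectory activity
    return activity >= threshold

# Toy demonstration: 4 parameters, two of which barely move.
rng = np.random.default_rng(0)
grads = rng.normal(0.0, 0.1, size=(20, 4))
grads[:, 1] *= 1e-4  # nearly inactive parameters
grads[:, 3] *= 1e-4
active = adaptive_freeze(grads)
print(active)  # parameters 1 and 3 are flagged for freezing
```

Freezing the flagged parameters shrinks the gradient computation from 4 to 2 circuit-evaluation groups per step here, mirroring the 40–60% reduction the paper reports at scale.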