🤖 AI Summary
Variational quantum algorithms (VQAs) face a fundamental trade-off between expressibility and trainability: increasing circuit expressibility often exacerbates the barren plateau phenomenon, leading to vanishing gradients that hinder optimization. Focusing on the variational quantum eigensolver (VQE), this work proposes selective gate activation strategies to mitigate this trade-off. It conducts the first systematic evaluation of three gate activation mechanisms and identifies magnitude-based dynamic activation—where gates are activated according to the absolute values of their parameters—as the most effective. On multiple molecular Hamiltonian instances, it accelerates convergence by 42% over random activation, significantly suppresses gradient decay, and enhances optimization stability. Crucially, this strategy requires no increase in circuit depth or parameter count, thereby striking an effective balance between expressibility and trainability and pointing toward practical, scalable VQAs.
📝 Abstract
Hybrid quantum-classical computing relies heavily on Variational Quantum Algorithms (VQAs) to tackle challenges in diverse fields such as quantum chemistry and machine learning. However, VQAs face a critical limitation: the trade-off between circuit trainability and expressibility. Trainability, the ease of optimizing circuit parameters for problem-solving, is often hampered by the Barren Plateau phenomenon, where gradients vanish and optimization stalls. On the other hand, increasing expressibility, the ability to represent a wide range of quantum states, often necessitates deeper circuits with more parameters, which in turn exacerbates trainability issues. In this work, we investigate selective gate activation strategies as a potential solution to these challenges within the context of Variational Quantum Eigensolvers (VQEs). We evaluate three approaches: activating gates randomly without regard to their type or parameter magnitude, activating gates randomly but limited to a single gate type, and activating gates based on the magnitude of their parameter values. Experimental results reveal that the magnitude-based strategy surpasses the other methods, achieving improved convergence.
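The magnitude-based strategy described above can be sketched concretely. The snippet below is a minimal, framework-agnostic illustration (not the paper's implementation): given a vector of rotation angles, it activates only the `k` gates with the largest absolute parameter values and effectively freezes the rest by zeroing their angles, so an inactive rotation such as RZ(0) acts as the identity. The function names and the top-`k` selection rule are assumptions for illustration.

```python
import numpy as np

def magnitude_based_activation(params, k):
    """Return a boolean mask activating the k gates with largest |theta|.

    True  -> gate is active (parameter is trained this step)
    False -> gate is inactive (treated as identity)
    """
    params = np.asarray(params, dtype=float)
    if not 0 < k <= params.size:
        raise ValueError("k must be between 1 and the number of gates")
    # Indices of the k largest-magnitude parameters.
    top_k = np.argsort(np.abs(params))[-k:]
    mask = np.zeros(params.size, dtype=bool)
    mask[top_k] = True
    return mask

def masked_params(params, mask):
    """Zero out inactive angles so e.g. RZ(0) reduces to the identity."""
    return np.where(mask, np.asarray(params, dtype=float), 0.0)

# Example: 6 rotation angles, activate the 3 largest in magnitude.
theta = np.array([0.05, -1.2, 0.3, 0.9, -0.01, 0.4])
mask = magnitude_based_activation(theta, k=3)
active_theta = masked_params(theta, mask)
```

Because the mask depends on the current parameter values, it can be recomputed each optimization step, making the activation dynamic without adding circuit depth or new parameters.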