🤖 AI Summary
This work proposes BRIDG-Q, a novel approach that addresses the challenge of barren plateaus in variational quantum algorithms caused by poor parameter initialization—particularly acute when large language models (LLMs) generate flexible, problem-adaptive quantum circuit structures that lack effective initialization strategies. BRIDG-Q uniquely integrates LLM-generated circuits with data-driven empirical Bayesian parameter initialization: it employs AgentQ to construct problem-conditioned circuit topologies, discards the LLM's unreliable continuous parameter predictions, and injects initial values derived from BEINIT. By moving beyond conventional fixed-template ansätze, this method enables neuro-symbolic collaboration for quantum circuit initialization. Evaluated on graph optimization benchmarks, BRIDG-Q substantially enhances optimization robustness, with oracle per-instance circuit selection reducing the final residual energy by approximately 10% on average.
📝 Abstract
Quantum circuit initialisation is a key bottleneck in variational quantum algorithms (VQAs), strongly impacting optimisation stability and convergence. Recent work shows that large language models (LLMs) can synthesise high-quality variational circuit architectures, but their continuous parameter predictions are unreliable. Conversely, data-driven initialisation methods such as BEINIT improve trainability via problem-adaptive priors, yet assume fixed ansatz templates and ignore generative circuit structure. We propose BRIDG-Q (Barren-Plateau-Resilient Initialisation with Data-Aware LLM-Generated Quantum Circuits), a neuro-symbolic pipeline that bridges this gap by coupling LLM-generated circuit architectures with empirical-Bayes parameter initialisation. BRIDG-Q uses AgentQ to generate problem-conditioned circuit topologies, discards the LLM-generated parameter values, and injects data-informed parameter initialisations to mitigate barren plateau effects. Evaluations on graph optimisation benchmarks using residual energy gap and convergence metrics show improved optimisation robustness, indicating that data-driven initialisation remains effective even for LLM-generated circuits, with oracle per-instance selection achieving approximately a 10% reduction in final residual energy.
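To make the initialisation step concrete, here is a minimal NumPy sketch of a BEINIT-style empirical-Bayes initialiser: it fits a Beta distribution (by method of moments) to previously optimised circuit parameters normalised to (0, 1), then samples fresh initial angles from that prior. The function names, the moment-matching fit, and the example prior data are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fit_beta_moments(samples):
    """Method-of-moments fit of Beta(a, b) to samples in (0, 1)."""
    m, v = samples.mean(), samples.var()
    common = m * (1.0 - m) / v - 1.0  # from mean/variance of a Beta
    return m * common, (1.0 - m) * common

def beinit_style_init(prior_params, n_params, rng=None, scale=2 * np.pi):
    """Sample initial angles from a Beta prior fitted to previously
    optimised parameters (wrapped into [0, scale), then normalised)."""
    rng = np.random.default_rng() if rng is None else rng
    normalised = (np.asarray(prior_params) % scale) / scale
    a, b = fit_beta_moments(normalised)
    return scale * rng.beta(a, b, size=n_params)

# Hypothetical prior: angles from earlier optimised runs on similar graphs.
prior = np.random.default_rng(0).normal(np.pi, 0.5, size=200)
theta0 = beinit_style_init(prior, n_params=12, rng=np.random.default_rng(1))
print(theta0.shape)  # 12 data-informed initial angles in [0, 2*pi)
```

In the BRIDG-Q pipeline as described, angles like `theta0` would replace the LLM-predicted parameter values on the AgentQ-generated circuit topology before optimisation begins.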