Intrinsic preservation of plasticity in continual quantum learning

📅 2025-11-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional deep learning suffers from plasticity loss and catastrophic forgetting in continual learning, primarily due to unbounded weight growth and pathological gradient behavior. To address this, we propose a unitary-constrained quantum learning model that restricts parameter optimization to a compact manifold, thereby intrinsically suppressing gradient explosion and saturation via physical dynamical constraints—ensuring persistent plasticity. The model employs a quantum neural network architecture compatible with both supervised and reinforcement learning paradigms, and processes both high-dimensional classical images and native quantum data. Experiments demonstrate long-term performance stability across sequential multi-task continual learning benchmarks, with no degradation as task sequence length increases—significantly outperforming classical baselines. This work provides the first systematic characterization of how unitary dynamics fundamentally preserve plasticity in continual learning, establishing a novel paradigm for robust, adaptive intelligent systems.

Technology Category

Application Category

📝 Abstract
Artificial intelligence in dynamic, real-world environments requires the capacity for continual learning. However, standard deep learning suffers from a fundamental issue: loss of plasticity, in which networks gradually lose their ability to learn from new data. Here we show that quantum learning models naturally overcome this limitation, preserving plasticity over long timescales. We demonstrate this advantage systematically across a broad spectrum of tasks from multiple learning paradigms, including supervised learning and reinforcement learning, and diverse data modalities, from classical high-dimensional images to quantum-native datasets. Although classical models exhibit performance degradation correlated with unbounded weight and gradient growth, quantum neural networks maintain consistent learning capabilities regardless of the data or task. We identify the origin of the advantage as the intrinsic physical constraints of quantum models. Unlike classical networks where unbounded weight growth leads to landscape ruggedness or saturation, the unitary constraints confine the optimization to a compact manifold. Our results suggest that the utility of quantum computing in machine learning extends beyond potential speedups, offering a robust pathway for building adaptive artificial intelligence and lifelong learners.
Problem

Research questions and friction points this paper is trying to address.

Overcoming loss of plasticity in classical deep learning systems
Preserving long-term learning capability across diverse data modalities
Maintaining consistent optimization on compact quantum manifolds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum models preserve plasticity via unitary constraints
Quantum networks maintain learning on diverse data types
Physical constraints prevent weight growth in quantum optimization
🔎 Similar Papers
No similar papers found.