Dual-Stage Invariant Continual Learning under Extreme Visual Sparsity

📅 2026-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses representation drift in continual learning under extreme visual sparsity, such as spaceborne object detection, where background-dominated inputs degrade conventional methods. The authors propose a two-stage invariant continual learning framework and, for the first time, elucidate how background gradients induce representation drift under sparsity. Moving beyond output-only distillation, the approach enforces dual structural and semantic consistency constraints by jointly distilling intermediate features and detection predictions. This is combined with a sparsity-aware data conditioning strategy built on distribution-aware augmentation and patch-based sampling, which suppresses error propagation while preserving model adaptability. Evaluated on a high-resolution space object detection benchmark, the method achieves a +4.0 mAP improvement over state-of-the-art techniques and shows markedly improved robustness under continuous domain shift.
📝 Abstract
Continual learning seeks to maintain stable adaptation under non-stationary environments, yet this problem becomes particularly challenging in object detection, where most existing methods implicitly assume relatively balanced visual conditions. In extreme-sparsity regimes, such as those observed in space-based resident space object (RSO) detection scenarios, foreground signals are overwhelmingly dominated by background observations. Under such conditions, we analytically demonstrate that background-driven gradients destabilize the feature backbone during sequential domain shifts, causing progressive representation drift. This exposes a structural limitation of continual learning approaches that rely solely on output-level distillation: they fail to preserve intermediate representation stability. To address this, we propose a dual-stage invariant continual learning framework based on joint distillation, enforcing structural consistency on backbone representations and semantic consistency on detection predictions, thereby suppressing error propagation at its source while maintaining adaptability. Furthermore, to regulate gradient statistics under severe imbalance, we introduce a sparsity-aware data conditioning strategy combining patch-based sampling and distribution-aware augmentation. Experiments on a high-resolution space-based RSO detection dataset show consistent improvement over established continual object detection methods, achieving an absolute gain of +4.0 mAP under sequential domain shifts.
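The joint distillation objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes an MSE penalty for structural (feature-level) consistency and a KL penalty for semantic (prediction-level) consistency, with the previous-task model acting as a frozen teacher; all function names and the `alpha`/`beta` weights are illustrative.

```python
import numpy as np

def feature_distill_loss(f_student, f_teacher):
    # Structural consistency: penalize drift of intermediate backbone
    # features away from the frozen previous-task model (MSE).
    return float(np.mean((f_student - f_teacher) ** 2))

def prediction_distill_loss(p_student, p_teacher, eps=1e-8):
    # Semantic consistency: KL divergence between the old and new
    # models' per-detection class distributions.
    p_t = p_teacher + eps
    p_s = p_student + eps
    return float(np.mean(np.sum(p_t * np.log(p_t / p_s), axis=-1)))

def joint_distill_loss(f_s, f_t, p_s, p_t, alpha=1.0, beta=1.0):
    # Both levels are distilled jointly, so representation drift is
    # suppressed at its source rather than only at the detector output.
    return (alpha * feature_distill_loss(f_s, f_t)
            + beta * prediction_distill_loss(p_s, p_t))
```

When student and teacher agree exactly, both terms vanish; any backbone drift or prediction shift raises the loss, which is the invariance the framework enforces alongside the ordinary detection loss on new-domain data.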
Problem

Research questions and friction points this paper is trying to address.

continual learning
visual sparsity
object detection
representation drift
domain shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

dual-stage invariant learning
joint distillation
representation stability
sparsity-aware data conditioning
continual object detection
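The sparsity-aware data conditioning idea, cropping training patches around the few foreground objects so that background pixels no longer dominate the gradient statistics, can be sketched as below. This is a hedged illustration of patch-based sampling only; the function name, patch size, and background-patch count are assumptions, not details from the paper.

```python
import numpy as np

def sample_patches(image, centers, patch=128, n_bg=2, rng=None):
    # Illustrative sparsity-aware sampling: one patch around each rare
    # foreground object center, plus a few random background patches.
    # This rebalances foreground/background ratios that full frames
    # would otherwise skew heavily toward background.
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    crops = []
    for (cx, cy) in centers:  # object centers in pixel coordinates
        x0 = int(np.clip(cx - patch // 2, 0, w - patch))
        y0 = int(np.clip(cy - patch // 2, 0, h - patch))
        crops.append(image[y0:y0 + patch, x0:x0 + patch])
    for _ in range(n_bg):  # keep some background-only context
        x0 = int(rng.integers(0, w - patch + 1))
        y0 = int(rng.integers(0, h - patch + 1))
        crops.append(image[y0:y0 + patch, x0:x0 + patch])
    return crops
```

Distribution-aware augmentation would then be applied per patch; the key point is that sampling is biased toward foreground before any gradient is computed.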