Orthogonal Projection Subspace to Aggregate Online Prior-knowledge for Continual Test-time Adaptation

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address catastrophic forgetting and error accumulation in continual test-time adaptation (CTTA) for semantic segmentation, this paper proposes a framework combining an orthogonal projection subspace constraint with online prior-knowledge aggregation. Methodologically: (i) it confines model updates to a subspace orthogonal to source-domain knowledge directions, explicitly preserving the pre-trained source model; (ii) it introduces an aggressive image-masking-based online prior aggregation mechanism that improves the quality of pseudo labels exchanged between teacher and student models; and (iii) it integrates parameter-efficient fine-tuning, online knowledge distillation, and mask-based augmentation for lightweight, pseudo-label-driven adaptation. Evaluated on multiple CTTA semantic segmentation benchmarks, the method outperforms existing approaches while mitigating error propagation and balancing adaptation effectiveness with computational efficiency.
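The teacher-student loop with image masking described above can be sketched roughly as follows. The patch size, masking ratio, and EMA momentum here are illustrative assumptions, not the paper's reported settings: the student sees an aggressively masked view of the target image, while the teacher's weights track the student via an exponential moving average to stabilize pseudo labels.

```python
import numpy as np

def random_mask(image, patch=4, ratio=0.5, rng=None):
    """Zero out a random subset of square patches to mimic target dynamism.

    `patch` and `ratio` are illustrative hyperparameters, not the paper's.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape[:2]
    masked = image.copy()
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            if rng.random() < ratio:
                masked[y:y + patch, x:x + patch] = 0.0
    return masked

def ema_update(teacher, student, momentum=0.999):
    """Teacher weights track the student via an exponential moving average."""
    return {k: momentum * teacher[k] + (1 - momentum) * student[k]
            for k in teacher}

# Toy usage: mask an all-ones "image" and take one EMA step.
img = np.ones((8, 8))
masked = random_mask(img)
teacher = {"w": np.array([1.0])}
student = {"w": np.array([0.0])}
teacher = ema_update(teacher, student)
```

In the full pipeline, the teacher's prediction on the unmasked image would supervise the student's prediction on the masked view; only the masking and EMA bookkeeping are sketched here.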

📝 Abstract
Continual Test Time Adaptation (CTTA) is a task that requires a source pre-trained model to continually adapt to new scenarios with changing target distributions. Existing CTTA methods primarily focus on mitigating the challenges of catastrophic forgetting and error accumulation. Though emerging methods address forgetting via parameter-efficient fine-tuning, they still struggle to balance competitive performance and efficient model adaptation, particularly in complex tasks like semantic segmentation. In this paper, to tackle the above issues, we propose a novel pipeline, Orthogonal Projection Subspace to aggregate online Prior-knowledge, dubbed OoPk. Specifically, we first project a tuning subspace orthogonally, which allows the model to adapt to new domains while preserving the knowledge integrity of the pre-trained source model to alleviate catastrophic forgetting. Then, we elaborate an online prior-knowledge aggregation strategy that employs an aggressive yet efficient image masking strategy to mimic potential target dynamism, enhancing the student model's domain adaptability. This further gradually ameliorates the teacher model's knowledge, ensuring high-quality pseudo labels and reducing error accumulation. We demonstrate our method with extensive experiments, surpassing previous CTTA methods and achieving competitive performance across various continual TTA benchmarks in semantic segmentation tasks.
Problem

Research questions and friction points this paper is trying to address.

Balance performance and efficient adaptation in continual test-time adaptation
Alleviate catastrophic forgetting in changing target distributions
Reduce error accumulation in semantic segmentation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonal projection subspace preserves source knowledge
Aggressive image masking mimics target dynamism
Online prior-knowledge aggregation reduces error accumulation
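The first innovation above, updating only in directions orthogonal to source knowledge, can be illustrated with a minimal gradient-projection sketch. Representing the protected source-domain directions as an orthonormal basis `U` is an assumption for illustration; the paper's exact subspace construction may differ.

```python
import numpy as np

def orthogonal_projection(grad, source_basis):
    """Remove the component of `grad` lying in the span of `source_basis`.

    source_basis: (d, k) matrix with orthonormal columns standing in for
    source-domain knowledge directions (illustrative assumption).
    """
    U = source_basis
    return grad - U @ (U.T @ grad)

# Toy usage: protect the first coordinate axis in R^3.
U = np.array([[1.0], [0.0], [0.0]])
g = np.array([2.0, 3.0, 4.0])
g_perp = orthogonal_projection(g, U)
# The projected update has no component along the protected direction.
assert abs(U.T @ g_perp).max() < 1e-9
```

Because every projected update is orthogonal to the protected subspace, adaptation to new domains cannot overwrite the components of the weights that encode source knowledge, which is the mechanism behind the forgetting mitigation claimed above.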
Jinlong Li, University of Trento
Dong Zhao, University of Trento
Qi Zang, Xidian University (deep learning, semantic segmentation, unsupervised domain adaptation)
Zequn Jie, Meituan Inc.
Lin Ma, Meituan Inc.
Nicu Sebe, University of Trento (computer vision, multimedia)