Self-Training with Dynamic Weighting for Robust Gradual Domain Adaptation

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of inefficient knowledge transfer and incomplete intermediate-domain data in gradual domain adaptation (GDA), this paper proposes a dynamic weighted self-training framework. The method introduces a time-varying weighting mechanism with an adjustable hyperparameter ϱ to adaptively balance the contributions of the source-domain supervised loss and the target-domain pseudo-label loss, enabling smooth and robust knowledge transfer across intermediate domains. Furthermore, a temporal evolution strategy dynamically schedules domain-specific learning intensity, enhancing model adaptability to progressive distribution shifts. Extensive experiments on Rotated MNIST, Colored MNIST, Portrait, and Cover Type datasets demonstrate substantial improvements over state-of-the-art baselines. Ablation studies confirm that the dynamic scheduling mechanism plays a critical role in mitigating domain shift and improving generalization performance.

📝 Abstract
In this paper, we propose a new method called Self-Training with Dynamic Weighting (STDW), which aims to enhance robustness in Gradual Domain Adaptation (GDA) by addressing the challenge of smooth knowledge migration from the source to the target domain. Traditional GDA methods mitigate domain shift through intermediate domains and self-training but often suffer from inefficient knowledge migration or incomplete intermediate data. Our approach introduces a dynamic weighting mechanism that adaptively balances the loss contributions of the source and target domains during training. Specifically, we design an optimization framework governed by a time-varying hyperparameter $\varrho$ (progressing from 0 to 1), which controls the strength of domain-specific learning and ensures stable adaptation. The method leverages self-training to generate pseudo-labels and optimizes a weighted objective function for iterative model updates, maintaining robustness across intermediate domains. Experiments on rotated MNIST, color-shifted MNIST, portrait datasets, and the Cover Type dataset demonstrate that STDW outperforms existing baselines. Ablation studies further validate the critical role of $\varrho$'s dynamic scheduling in achieving progressive adaptation, confirming its effectiveness in reducing domain bias and improving generalization. This work provides both theoretical insights and a practical framework for robust gradual domain adaptation, with potential applications in dynamic real-world scenarios. The code is available at https://github.com/Dramwig/STDW.
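The weighted objective described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract says only that ϱ progresses from 0 to 1, so the linear schedule and all function names here are assumptions.

```python
# Hypothetical sketch of a dynamically weighted GDA objective.
# The linear rho schedule and the (1 - rho)/rho split are assumptions
# for illustration; the paper only states that rho moves from 0 to 1.

def rho_schedule(t, num_steps):
    """Time-varying weight rho, progressing linearly from 0 to 1."""
    return t / max(num_steps - 1, 1)

def weighted_loss(loss_source, loss_target, rho):
    """Balance the source supervised loss and target pseudo-label loss."""
    return (1.0 - rho) * loss_source + rho * loss_target

# Example: adaptation over three steps from source (t=0) to target (t=2).
losses = [weighted_loss(1.0, 2.0, rho_schedule(t, 3)) for t in range(3)]
print(losses)  # [1.0, 1.5, 2.0]
```

At t=0 the objective is purely the source loss; by the final step it is purely the target pseudo-label loss, which matches the "smooth knowledge migration" the abstract describes.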
Problem

Research questions and friction points this paper is trying to address.

Enhancing robustness in gradual domain adaptation with dynamic weighting
Addressing inefficient knowledge migration across intermediate domains
Balancing source and target domain losses during progressive adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic weighting balances source and target domain losses
Time-varying hyperparameter controls domain-specific learning strength
Self-training with pseudo-labels optimizes weighted objective function
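The self-training step listed above relies on generating pseudo-labels for each intermediate domain. A common way to do this, sketched here as an assumption (the paper does not specify a confidence threshold or selection rule), is to keep only the model's most confident predictions:

```python
# Illustrative pseudo-labeling step for self-training.
# The confidence threshold and helper name are assumptions for this
# sketch, not details taken from the paper.

def pseudo_label(probs, threshold=0.8):
    """Return (sample_index, predicted_class) pairs whose top
    class probability meets the confidence threshold."""
    labeled = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            labeled.append((i, p.index(conf)))
    return labeled

# Example: two confident predictions are kept, the ambiguous one is dropped.
preds = [[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]]
print(pseudo_label(preds))  # [(0, 0), (2, 1)]
```

The retained pairs would then feed the target-side term of the weighted objective, while low-confidence samples are skipped to limit error accumulation across intermediate domains.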
Zixi Wang
University of Electronic Science and Technology of China, Chengdu, Sichuan, China
Yushe Cao
Tsinghua University, Beijing, China
Yubo Huang
Zhenguan AI Lab, Shenzhen, Guangdong, China; Southwest Jiaotong University, Chengdu, Sichuan, China
Jinzhu Wei
Shanghai University, Shanghai, China
Jingzehua Xu
Tsinghua University, Beijing, China
Shuai Zhang
New Jersey Institute of Technology, Newark, NJ, United States
Xin Lai
ByteDance