Unsupervised Robust Domain Adaptation: Paradigm, Theory and Algorithm

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Unsupervised domain adaptation (UDA) suffers from a critical lack of adversarial robustness, and vanilla adversarial training (VAT) fails under UDA because of unaddressed interactions between domain shift and adversarial perturbations. Method: The authors establish the first generalization error upper bound that jointly accounts for domain shift and adversarial perturbations, revealing the theoretical root cause of this failure. Building on this analysis, they propose Unsupervised Robust Domain Adaptation (URDA), a new paradigm, and design DART (Disentangled Adversarial Robustness Training), a lightweight algorithm that decouples the transfer and robustness objectives. DART combines a two-step disentangled knowledge distillation procedure with instantaneous robustification, optimizing transferability and robustness jointly without modifying the backbone architecture. Results: Evaluated on four standard benchmarks, DART significantly improves adversarial robustness under attack while preserving performance on clean inputs and maintaining state-of-the-art domain adaptation accuracy, validating the theoretical soundness and practical efficacy of URDA.

📝 Abstract
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a label-rich source domain to an unlabeled target domain by addressing domain shifts. Most UDA approaches emphasize transferability but often overlook robustness against adversarial attacks. Although vanilla adversarial training (VAT) improves the robustness of deep neural networks, it has little effect on UDA. This paper focuses on answering three key questions: 1) Why does VAT, known for its defensive effectiveness, fail in the UDA paradigm? 2) What is the generalization bound theory under attacks, and how does it evolve from classical UDA theory? 3) How can we implement a robustification training procedure without complex modifications? Specifically, we explore and reveal the inherent entanglement challenge in the general UDA+VAT paradigm, and propose an unsupervised robust domain adaptation (URDA) paradigm. We further derive the generalization bound theory of the URDA paradigm so that it can resist both adversarial noise and domain shift. To the best of our knowledge, this is the first work to establish the URDA paradigm and theory. We further introduce a simple, novel yet effective URDA algorithm called Disentangled Adversarial Robustness Training (DART), a two-step training procedure that ensures both transferability and robustness. DART first pre-trains an arbitrary UDA model, and then applies an instantaneous robustification post-training step via disentangled distillation. Experiments on four benchmark datasets, with and without attacks, show that DART effectively enhances robustness while maintaining domain adaptability, and validate the URDA paradigm and theory.
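The abstract describes DART only at a high level: pre-train a UDA model, then robustify it post hoc via distillation on adversarial inputs. As an illustrative sketch only (not the paper's actual implementation), the two-step idea might look like the following toy example, using a linear logistic model as a stand-in for the UDA backbone and an FGSM-style perturbation as the attack; every model, loss, and hyperparameter here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable 2-class data; stands in for the labeled source domain.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def predict(w, X):
    """Logistic scores of a linear model."""
    return 1.0 / (1.0 + np.exp(-(X @ w)))

def grad_logistic(w, X, y):
    """Gradient of binary cross-entropy w.r.t. the weights w."""
    return X.T @ (predict(w, X) - y) / len(y)

# Step 1 (stand-in for UDA pre-training): fit a "teacher" on clean data.
teacher = np.zeros(5)
for _ in range(300):
    teacher -= 0.5 * grad_logistic(teacher, X, y)

# Step 2 (stand-in for instantaneous robustification via distillation):
# train a "student" on FGSM-perturbed inputs to match the teacher's
# clean-input predictions (soft labels), so the robustness objective is
# decoupled from the transfer objective learned in step 1.
eps = 0.1
soft = predict(teacher, X)  # teacher's clean-input soft labels
student = teacher.copy()
for _ in range(300):
    # FGSM-style perturbation: sign of the input-gradient of the loss,
    # which for a linear logistic model is (p - y) * w per sample.
    x_grad = (predict(student, X) - y)[:, None] * student[None, :]
    X_adv = X + eps * np.sign(x_grad)
    student -= 0.5 * grad_logistic(student, X_adv, soft)

# Evaluate the student on freshly perturbed inputs.
x_grad = (predict(student, X) - y)[:, None] * student[None, :]
X_adv = X + eps * np.sign(x_grad)
robust_acc = np.mean((predict(student, X_adv) > 0.5) == y)
```

The design point being illustrated is the decoupling: the robustification step never touches the pre-training objective, it only distills the frozen teacher's clean predictions into a student evaluated under perturbation.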
Problem

Research questions and friction points this paper is trying to address.

Addressing adversarial vulnerability in unsupervised domain adaptation methods
Developing generalization theory for domain adaptation under adversarial attacks
Creating robust training procedures without complex architectural modifications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes unsupervised robust domain adaptation paradigm
Introduces disentangled adversarial robustness training algorithm
Establishes generalization bound theory for adversarial noise
Fuxiang Huang
The Hong Kong University of Science and Technology (HKUST)
Multimodal Learning, Foundation Model for Vertical Domain, Domain Adaptation
Xiaowei Fu
Chongqing Key Laboratory of Bio-perception and Multi-modal Intelligent Information Processing, Chongqing University, Chongqing 400044, China
Shiyu Ye
School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
Lina Ma
School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
Wen Li
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Xinbo Gao
Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
David Zhang
School of Science and Engineering, Chinese University of Hong Kong (Shenzhen), Shenzhen 518172, China
Lei Zhang
Chongqing Key Laboratory of Bio-perception and Multi-modal Intelligent Information Processing, Chongqing University, Chongqing 400044, China