Enhancing Domain Adaptation through Prompt Gradient Alignment

📅 2024-06-13
🏛️ Neural Information Processing Systems
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing unsupervised domain adaptation (UDA) methods overemphasize domain invariance, often compromising feature discriminability. This paper formulates UDA as a multi-objective optimization problem, where losses from individual source domains serve as distinct objectives. We propose a gradient consensus mechanism that jointly optimizes gradients across objectives via directional alignment and norm regularization—balancing discriminability and generalizability. To our knowledge, this is the first work to formalize UDA as a multi-objective gradient alignment problem. We further design a differentiable prompt-coordinated optimization framework, unifying support for both single-source and multi-source UDA. Leveraging vision-language foundation models, our approach integrates prompt learning with gradient regularization, achieving significant improvements over state-of-the-art vision-language adaptation methods on multiple UDA benchmarks. The code is publicly available.

📝 Abstract
Prior Unsupervised Domain Adaptation (UDA) methods often aim to train a domain-invariant feature extractor, which may hinder the model from learning sufficiently discriminative features. To tackle this, a line of work based on prompt learning leverages large-scale pre-trained vision-language models to learn both domain-invariant and domain-specific features through a set of domain-agnostic and domain-specific learnable prompts. These studies typically enforce invariance constraints on the representation, output, or prompt space to learn such prompts. In contrast, we cast UDA as a multi-objective optimization problem in which each objective is represented by a domain loss. Under this new framework, we propose to align per-objective gradients to foster consensus between them. Additionally, to prevent potential overfitting when fine-tuning such deep architectures, we penalize the norms of these gradients. To achieve these goals, we devise a practical gradient update procedure that works under both single-source and multi-source UDA. Empirically, our method consistently outperforms other vision-language model adaptation methods. The implementation is available at https://github.com/VietHoang1512/PGA.
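In symbols, the multi-objective view described in the abstract can be sketched as follows (notation is ours, reconstructed from the abstract rather than taken from the paper):

```latex
% Each domain d contributes its own loss, giving a vector-valued objective:
\min_{\theta}\; \big( \mathcal{L}_1(\theta), \ldots, \mathcal{L}_K(\theta) \big)

% With per-objective gradients g_d = \nabla_\theta \mathcal{L}_d(\theta),
% the update fosters directional consensus between objectives while
% regularizing gradient norms against overfitting:
\text{encourage}\; \sum_{d \neq d'} \cos\!\big(g_d, g_{d'}\big)
\qquad \text{penalize}\; \sum_{d} \lVert g_d \rVert^2
```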
Problem

Research questions and friction points this paper is trying to address.

Improve domain adaptation via gradient alignment
Address overfitting in fine-tuning deep models
Enhance both domain-invariant and specific features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages prompt learning for domain adaptation
Aligns per-objective gradients for consensus
Penalizes gradient norms to prevent overfitting
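The alignment-plus-norm-penalty idea in the bullets above can be illustrated with a toy gradient-consensus routine. This is a minimal NumPy sketch using a PCGrad-style conflict projection and simple norm shrinkage as stand-ins; it is not the authors' actual PGA update, and `align_gradients` and `norm_penalty` are names we introduce for illustration:

```python
import numpy as np

def align_gradients(grads, norm_penalty=0.1):
    """Toy consensus update over per-domain gradients.

    For each pair of gradients pointing in conflicting directions
    (negative inner product), project away the conflicting component
    before averaging, then shrink the result as a crude norm penalty.
    """
    adjusted = [g.astype(float).copy() for g in grads]
    for i, gi in enumerate(adjusted):
        for j, gj in enumerate(grads):
            if i == j:
                continue
            dot = gi @ gj
            if dot < 0:  # conflict: remove the component of gi along gj
                gi -= (dot / (gj @ gj)) * gj
    update = np.mean(adjusted, axis=0)
    # Shrink the update to discourage overly large (overfitting-prone) steps
    return (1.0 - norm_penalty) * update

# Usage: two conflicting per-domain gradients yield a consensus step that
# no longer opposes either objective.
g_src = np.array([1.0, 0.0])
g_tgt = np.array([-1.0, 1.0])
step = align_gradients([g_src, g_tgt])
```

After the projection, the combined step has a nonnegative inner product with each original gradient, so no domain's loss is pushed uphill to first order.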