Stein Discrepancy for Unsupervised Domain Adaptation

📅 2025-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In unsupervised domain adaptation (UDA) with abundant source but scarce target samples, existing methods based on Wasserstein distance or maximum mean discrepancy (MMD) suffer degraded performance because they rely on sufficient target data. Method: This paper introduces Stein discrepancy—the first such application in UDA—leveraging the score function of the target distribution to construct an asymmetric domain-alignment mechanism that is less sensitive to small target sample sizes. The authors theoretically derive a generalization error upper bound and propose both non-kernelized and kernelized Stein discrepancy optimization frameworks, integrated with differentiable score function estimation for deep feature alignment. Results: Extensive experiments on multiple UDA benchmarks under low-target-sample settings demonstrate consistent superiority over Wasserstein distance, MMD, and other mainstream approaches, achieving average accuracy gains of 2.1–5.7%.
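The score-function machinery behind this method rests on Stein's identity: for a distribution p with score s_p(x) = ∇ log p(x) and a suitable test function f, E_p[s_p(x) f(x) + f′(x)] = 0. A minimal one-dimensional sanity check (assuming p = N(0, 1), so s_p(x) = −x, and the test function f(x) = x², both chosen here for illustration, not taken from the paper):

```python
import numpy as np

# p = N(0, 1), whose score is s_p(x) = d/dx log p(x) = -x
rng = np.random.default_rng(42)
x = rng.normal(size=100_000)

def f(x):        # a smooth test function (illustrative choice)
    return x**2

def f_prime(x):  # its derivative
    return 2 * x

# Stein's identity: E_p[s_p(x) f(x) + f'(x)] = 0
# Monte Carlo estimate should be close to zero.
estimate = np.mean(-x * f(x) + f_prime(x))
```

Because the identity involves p only through its score, a discrepancy built from it needs samples from the other distribution alone, which is why the dependence on target sample size is weakened.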

📝 Abstract
Unsupervised domain adaptation (UDA) leverages information from a labeled source dataset to improve accuracy on a related but unlabeled target dataset. A common approach to UDA is aligning representations from the source and target domains by minimizing the distance between their data distributions. Previous methods have employed distances such as Wasserstein distance and maximum mean discrepancy. However, these approaches are less effective when the target data is significantly scarcer than the source data. Stein discrepancy is an asymmetric distance between distributions that relies on one distribution only through its score function. In this paper, we propose a novel UDA method that uses Stein discrepancy to measure the distance between source and target domains. We develop a learning framework using both non-kernelized and kernelized Stein discrepancy. Theoretically, we derive an upper bound for the generalization error. Numerical experiments show that our method outperforms existing methods using other domain discrepancy measures when only small amounts of target data are available.
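The kernelized variant mentioned in the abstract admits a closed-form sample estimate. A minimal sketch of the standard kernelized Stein discrepancy (KSD) V-statistic with an RBF kernel, assuming the target score function is available in closed form (here the illustrative target is N(0, I) with score s(x) = −x; the paper itself estimates the score, which this sketch does not do):

```python
import numpy as np

def ksd_rbf(samples, score_fn, h=1.0):
    """V-statistic estimate of the squared kernelized Stein discrepancy
    between the empirical distribution of `samples` (n x d) and a target
    distribution known only through score_fn(x) = grad log p(x).
    Uses an RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 h^2))."""
    X = np.asarray(samples, dtype=float)
    n, d = X.shape
    S = score_fn(X)                        # (n, d) target scores at the samples
    diff = X[:, None, :] - X[None, :, :]   # (n, n, d) pairwise differences
    sq = np.sum(diff**2, axis=-1)          # (n, n) squared distances
    K = np.exp(-sq / (2 * h**2))           # RBF kernel matrix
    # Stein kernel u_p(x, x') = s(x)^T k s(x') + s(x)^T grad_{x'} k
    #                           + grad_x k^T s(x') + trace(grad_x grad_{x'} k)
    term1 = (S @ S.T) * K
    term2 = np.einsum('id,ijd->ij', S, diff) * K / h**2   # grad_{x'} k = +diff/h^2 * k
    term3 = -np.einsum('ijd,jd->ij', diff, S) * K / h**2  # grad_x k = -diff/h^2 * k
    term4 = K * (d / h**2 - sq / h**4)                    # trace of the mixed Hessian
    return float(np.mean(term1 + term2 + term3 + term4))

# Demo: samples matching the target give a small KSD; shifted samples a larger one.
rng = np.random.default_rng(0)
score = lambda x: -x  # score of N(0, I)
ksd_matched = ksd_rbf(rng.normal(0.0, 1.0, size=(200, 2)), score)
ksd_shifted = ksd_rbf(rng.normal(1.5, 1.0, size=(200, 2)), score)
```

The asymmetry is visible in the signature: one distribution enters through samples, the other only through its score, which is what makes the measure usable when target samples are scarce.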
Problem

Research questions and friction points this paper is trying to address.

Aligning source and target representations when target data is much scarcer than source data
Degraded performance of Wasserstein- and MMD-based alignment with few target samples
Improving accuracy on a related but unlabeled target dataset
Innovation

Methods, ideas, or system contributions that make the work stand out.

First application of Stein discrepancy to UDA, relying on the target distribution only through its score function
Both non-kernelized and kernelized optimization frameworks, with a generalization error upper bound
Consistent accuracy gains over Wasserstein, MMD, and other baselines when target data is scarce
Anneke von Seeger
School of Mathematics, University of Minnesota, Minneapolis, MN, USA
Dongmian Zou
Duke Kunshan University
applied harmonic analysis, machine learning
Gilad Lerman
University of Minnesota