ConstStyle: Robust Domain Generalization with Unified Style Transformation

📅 2025-09-07
🤖 AI Summary
Domain Generalization (DG) aims to improve model generalization to unseen test domains, yet performance often degrades significantly when few source domains are available or domain shifts are large. This paper proposes ConstStyle, a unified style transformation mechanism that theoretically models cross-domain mappings and employs learnable style normalization to enforce consistency between the training and inference stages, thereby achieving implicit distribution alignment between source and target domains. Crucially, the method guarantees strict consistency of the style transformation across both phases, enhancing the robustness of domain-invariant features. Evaluated on multiple standard DG benchmarks with only 2–3 source domains, ConstStyle outperforms existing state-of-the-art approaches by a significant margin, with absolute accuracy gains of up to 19.82%.

📝 Abstract
Deep neural networks often suffer performance drops when test data distribution differs from training data. Domain Generalization (DG) aims to address this by focusing on domain-invariant features or augmenting data for greater diversity. However, these methods often struggle with limited training domains or significant gaps between seen (training) and unseen (test) domains. To enhance DG robustness, we hypothesize that it is essential for the model to be trained on data from domains that closely resemble unseen test domains, an inherently difficult task due to the absence of prior knowledge about the unseen domains. Accordingly, we propose ConstStyle, a novel approach that leverages a unified domain to capture domain-invariant features and bridge the domain gap with theoretical analysis. During training, all samples are mapped onto this unified domain, optimized for seen domains. During testing, unseen domain samples are projected similarly before predictions. By aligning both training and testing data within this unified domain, ConstStyle effectively reduces the impact of domain shifts, even with large domain gaps or few seen domains. Extensive experiments demonstrate that ConstStyle consistently outperforms existing methods across diverse scenarios. Notably, when only a limited number of seen domains are available, ConstStyle can boost accuracy up to 19.82% compared to the next best approach.
Problem

Research questions and friction points this paper is trying to address.

Address performance drop from training-test distribution shift
Bridge domain gap between seen and unseen domains
Enhance robustness with limited training domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified domain mapping for domain-invariant features
Training and testing alignment in unified domain
Projection optimization for seen and unseen domains
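The mapping sketched above can be illustrated with a minimal style-normalization routine. This is a hypothetical sketch, not the paper's actual implementation: it assumes, in the spirit of AdaIN-style methods, that a sample's "style" is its channel-wise feature mean and standard deviation, and that mapping onto the unified domain means replacing those statistics with a single shared set (`unified_mean`, `unified_std` are illustrative names) applied identically at training and test time.

```python
import numpy as np

def unified_style_transform(features, unified_mean, unified_std, eps=1e-5):
    """Map each sample's channel-wise style onto fixed 'unified' statistics.

    features:     (N, C, H, W) feature maps.
    unified_mean: (C,) target style mean of the hypothetical unified domain.
    unified_std:  (C,) target style standard deviation.
    """
    # Per-sample, per-channel style statistics over spatial dimensions
    mu = features.mean(axis=(2, 3), keepdims=True)
    sigma = features.std(axis=(2, 3), keepdims=True) + eps
    # Strip each sample's own style...
    normalized = (features - mu) / sigma
    # ...and re-style it with the unified domain's statistics,
    # so seen and unseen samples land in one shared style space
    return (normalized * unified_std.reshape(1, -1, 1, 1)
            + unified_mean.reshape(1, -1, 1, 1))

# Example: two samples with different styles mapped onto one unified style
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=5.0, size=(2, 3, 8, 8))
out = unified_style_transform(x, unified_mean=np.zeros(3), unified_std=np.ones(3))
```

Because the same fixed statistics are used at inference, the transformation is identical in both phases, which is the consistency property the summary emphasizes.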
Nam Duong Tran
Institute for AI Innovation and Societal Impact, Hanoi University of Science and Technology
Nam Nguyen Phuong
Institute for AI Innovation and Societal Impact, Hanoi University of Science and Technology
Hieu H. Pham
College of Engineering & Computer Science, VinUni-Illinois Smart Health Center, VinUniversity
AI, Computer Vision, Deep Learning, Medical Image Analysis, Computational Bioimaging
Phi Le Nguyen
Institute for AI Innovation and Societal Impact, Hanoi University of Science and Technology
My T. Thai
Professor, University of Florida, IEEE Fellow
Explainable AI, Security and Privacy, Network Science, Optimization