🤖 AI Summary
This work addresses the limited generalization of deep learning models under synthetic-to-real (Sim2Real) domain shift by proposing StyleMixDG, a lightweight, model-agnostic data augmentation approach. StyleMixDG performs style randomization purely through style transfer, drawing on a large, diverse pool of artistic styles, and requires no architectural modifications or additional loss terms. The study systematically investigates how style diversity, texture complexity, and style source affect domain generalization, resolving three key design trade-offs in style-based augmentation and showing that diverse artistic styles alone can substantially improve generalization. Evaluated on the GTAV→{BDD100k, Cityscapes, Mapillary Vistas} benchmark, StyleMixDG consistently outperforms strong existing baselines, validating the efficacy of the proposed design principles.
📝 Abstract
Deep learning models for computer vision often suffer from poor generalization when deployed in real-world settings, especially when trained on synthetic data, owing to the well-known Sim2Real gap. Despite the growing popularity of style transfer as a data augmentation strategy for domain generalization, the literature contains unresolved contradictions regarding three key design axes: the diversity of the style pool, the role of texture complexity, and the choice of style source. We present a systematic empirical study that isolates and evaluates each of these factors for driving-scene understanding, resolving inconsistencies in prior work. Our findings show that (i) expanding the style pool yields larger gains than repeated augmentation with few styles, (ii) texture complexity has no significant effect when the pool is sufficiently large, and (iii) diverse artistic styles outperform domain-aligned alternatives. Guided by these insights, we derive StyleMixDG (Style-Mixing for Domain Generalization), a lightweight, model-agnostic augmentation recipe that requires no architectural modifications or additional losses. Evaluated on the GTAV $\rightarrow$ {BDD100k, Cityscapes, Mapillary Vistas} benchmark, StyleMixDG demonstrates consistent improvements over strong baselines, confirming that the empirically identified design principles translate into practical gains. The code will be released on GitHub.
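The abstract describes StyleMixDG only at the recipe level: sample a style from a large, diverse artistic pool and apply style transfer as a plain data augmentation, with no extra losses or architecture changes. As a rough illustration (not the authors' implementation), channel-statistic matching in the spirit of AdaIN can stand in for the style-transfer step; the function names `adain_stylize` and `stylemix_augment`, the pixel-space statistic matching, and the application probability `p` are all assumptions for this sketch.

```python
import numpy as np

def adain_stylize(content, style, eps=1e-6):
    """Match per-channel mean/std of `content` to those of `style`
    (AdaIN-style statistic alignment). Arrays are shaped (C, H, W).
    This stands in for a full style-transfer network in the sketch."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return (content - c_mean) / c_std * s_std + s_mean

def stylemix_augment(image, style_pool, rng, p=0.5):
    """With probability `p`, stylize `image` using a style drawn
    uniformly from the pool; otherwise return it unchanged. The
    paper's key finding is that the pool should be large and
    diverse, not that any single style is special."""
    if rng.random() >= p:
        return image
    style = style_pool[rng.integers(len(style_pool))]
    return adain_stylize(image, style)
```

In a training pipeline, `stylemix_augment` would be called per sample before normalization, with `style_pool` holding features or images from the artistic-style dataset; segmentation labels are untouched because the transform only alters appearance, not geometry.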