🤖 AI Summary
Single-domain generalized object detection (S-DGOD) aims to train a detector using data from only one source domain while ensuring robust generalization to multiple unseen target domains (e.g., varying weather or illumination conditions). However, existing approaches rely on coarse-grained vision-language knowledge, limiting their ability to learn domain-invariant region-level features. To address this, we propose a fine-grained cross-modal vision-language interaction framework. Our method introduces a cross-modal region-aware feature interaction mechanism and a cross-domain proposal refinement and mixing strategy, enabling, for the first time, fine-grained text-image alignment to drive region-level generalizable representation learning. It integrates vision-language model (VLM) fine-tuning, cross-modal attention, region-level contrastive learning, and dynamic proposal alignment with mixing-based augmentation. On the Cityscapes-C and DWD benchmarks, our approach improves mean performance under corruption (mPC) by +8.8% and +7.9%, respectively, establishing new state-of-the-art results.
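The summary mentions region-level contrastive learning between visual region features and text embeddings without implementation detail. As a minimal illustrative sketch (the function names, temperature value, and NumPy formulation are our assumptions, not the paper's released code), an InfoNCE-style loss that pulls each region proposal's pooled feature toward the text embedding of its class might look like:

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def region_text_contrastive_loss(region_feats, text_embeds, labels, tau=0.07):
    """InfoNCE-style region-level alignment loss (illustrative sketch).

    region_feats: (R, D) pooled features of R region proposals
    text_embeds:  (C, D) one text embedding per class prompt
    labels:       (R,)   class index of each region proposal
    tau:          temperature (0.07 is an assumed, CLIP-like default)
    """
    r = l2norm(region_feats)              # (R, D)
    t = l2norm(text_embeds)               # (C, D)
    logits = r @ t.T / tau                # (R, C) scaled cosine similarities
    # numerically stable log-softmax over the class axis
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # negative log-likelihood of each region's own class embedding
    return -log_prob[np.arange(len(labels)), labels].mean()
```

When a region feature coincides with its class's text embedding, the loss is near zero; mismatched features yield a larger penalty, which is the gradient signal that drives the region features toward domain-invariant, text-anchored representations.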
📝 Abstract
Single-Domain Generalized Object Detection (S-DGOD) aims to train an object detector on a single source domain while generalizing well to diverse unseen target domains, making it suitable for multimedia applications that involve various domain shifts, such as intelligent video surveillance and VR/AR technologies. With the success of large-scale Vision-Language Models, recent S-DGOD approaches exploit pre-trained vision-language knowledge to guide invariant feature learning across visual domains. However, the utilized knowledge remains at a coarse-grained level (e.g., a textual description of adverse weather paired with the whole image) and serves only as an implicit regularization for guidance, struggling to learn accurate region- and object-level features in varying domains. In this work, we propose a new cross-modal feature learning method that captures generalized and discriminative regional features for S-DGOD tasks. The core of our method is the mechanism of Cross-modal and Region-aware Feature Interaction, which simultaneously learns both inter-modal and intra-modal regional invariance through dynamic interactions between fine-grained textual and visual features. Moreover, we design a simple but effective strategy called Cross-domain Proposal Refining and Mixing, which aligns the positions of region proposals across multiple domains and diversifies them, enhancing the localization ability of detectors in unseen scenarios. Our method achieves new state-of-the-art results on S-DGOD benchmark datasets, with improvements of +8.8% mPC on Cityscapes-C and +7.9% mPC on DWD over baselines, demonstrating its efficacy.
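The abstract describes Cross-domain Proposal Refining and Mixing only at a high level: align region proposals across domain views, then diversify them. The sketch below is our illustrative guess at such a step (the IoU threshold, mixing range, and function names are assumptions), matching proposals between a source view and an augmented-domain view by IoU and convexly mixing the coordinates of matched pairs:

```python
import numpy as np

def iou_one_to_many(box, boxes):
    """IoU of one (4,) box in (x1, y1, x2, y2) format against an (N, 4) array."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    union = area(box) + area(boxes) - inter
    return inter / np.maximum(union, 1e-8)

def refine_and_mix_proposals(src_boxes, aug_boxes, iou_thresh=0.5,
                             mix_low=0.5, rng=None):
    """Illustrative sketch of cross-domain proposal aligning and mixing.

    Each source proposal is matched to its best-IoU counterpart in the
    augmented-domain view; matched pairs are mixed by a random convex
    combination of their coordinates, diversifying proposal positions.
    """
    rng = np.random.default_rng() if rng is None else rng
    mixed = src_boxes.astype(float).copy()
    for i, b in enumerate(src_boxes):
        ious = iou_one_to_many(b, aug_boxes)
        j = int(np.argmax(ious))
        if ious[j] >= iou_thresh:               # aligned pair found
            lam = rng.uniform(mix_low, 1.0)     # random convex mixing weight
            mixed[i] = lam * b + (1 - lam) * aug_boxes[j]
    return mixed
```

Unmatched proposals are left untouched, so the mixing only perturbs regions that the detector localizes consistently across domains, which is one plausible way to diversify proposals without corrupting hard cases.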