Boosting Domain Generalized and Adaptive Detection with Diffusion Models: Fitness, Generalization, and Transferability

📅 2025-06-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address performance degradation in cross-domain object detection caused by domain shift, this paper proposes a unified framework for domain generalization and adaptation based on diffusion models. The method jointly aligns source and unlabeled target domains at both feature-level and object-level. Key contributions include: (1) a single-step diffusion feature extraction mechanism that yields high-quality intermediate features via one forward pass, reducing inference cost by 75%; (2) an object-centric box-mask prompting auxiliary branch integrating class-aware prompt feature extraction and consistency regularization; and (3) collaborative alignment across domains without requiring target-domain annotations. Evaluated on three domain adaptation (DA) and five domain generalization (DG) benchmarks, the approach achieves state-of-the-art or competitive performance. Notably, it significantly outperforms existing methods under large domain shifts and low-data regimes on the COCO generalization benchmark, demonstrating superior efficiency, generalization capability, and robustness.

📝 Abstract
Detectors often suffer from performance drop due to the domain gap between training and testing data. Recent methods explore diffusion models applied to domain generalization (DG) and adaptation (DA) tasks, but still struggle with large inference costs and have not yet fully leveraged the capabilities of diffusion models. We propose to tackle these problems by extracting intermediate features from a single-step diffusion process, improving feature collection and fusion to reduce inference time by 75% while enhancing performance on source domains (i.e., Fitness). Then, we construct an object-centered auxiliary branch by applying box-masked images with class prompts to extract robust and domain-invariant features that focus on objects. We also apply a consistency loss to align the auxiliary and ordinary branches, balancing fitness and generalization while preventing overfitting and improving performance on target domains (i.e., Generalization). Furthermore, within a unified framework, standard detectors are guided by diffusion detectors through feature-level and object-level alignment on source domains (for DG) and unlabeled target domains (for DA), thereby improving cross-domain detection performance (i.e., Transferability). Our method achieves competitive results on 3 DA benchmarks and 5 DG benchmarks. Additionally, experiments on the COCO generalization benchmark demonstrate that our method maintains significant advantages and shows remarkable efficiency in large domain shifts and low-data scenarios. Our work shows the superiority of applying diffusion models to domain generalized and adaptive detection tasks and offers valuable insights for visual perception tasks across diverse domains. The code is available at https://github.com/heboyong/Fitness-Generalization-Transferability.
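The single-step feature extraction the abstract describes can be illustrated with a toy sketch: noise the image once at a fixed timestep, run a single forward pass, and collect multi-scale intermediate activations for the detector head. This is not the authors' implementation; the linear noise schedule, `toy_unet_features` stand-in, and all names here are illustrative assumptions.

```python
import numpy as np

def add_noise(x, t, T=1000, rng=None):
    # DDPM-style forward noising at a single timestep t
    # (simplified linear schedule; real schedules are cosine/learned).
    rng = rng or np.random.default_rng(0)
    alpha_bar = 1.0 - t / T
    noise = rng.standard_normal(x.shape)
    return np.sqrt(alpha_bar) * x + np.sqrt(1.0 - alpha_bar) * noise

def toy_unet_features(x_noisy):
    # Stand-in for the diffusion U-Net's intermediate activations,
    # returned at several spatial scales (here via naive subsampling).
    return [x_noisy[:, ::2**k, ::2**k] for k in range(3)]

def single_step_features(image, t=100):
    # One noising step + one forward pass -> multi-scale features,
    # instead of a full multi-step denoising trajectory.
    return toy_unet_features(add_noise(image, t))

feats = single_step_features(np.zeros((3, 64, 64)))
print([f.shape for f in feats])  # [(3, 64, 64), (3, 32, 32), (3, 16, 16)]
```

The point of the sketch is the cost model: one forward pass replaces the iterative sampling loop, which is where the reported 75% inference-time reduction comes from.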
Problem

Research questions and friction points this paper is trying to address.

Reduce inference time while enhancing detection performance
Extract robust domain-invariant features for object detection
Improve cross-domain detection via feature and object alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts features from single-step diffusion for efficiency
Uses box-masked images for domain-invariant object features
Aligns diffusion and standard detectors for cross-domain performance
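The second and third bullets can be sketched together: build a binary mask from ground-truth boxes, feed the masked (object-only) image to an auxiliary branch, and penalize disagreement between auxiliary and ordinary branch features. A minimal numpy sketch, with `box_mask` and the plain MSE consistency term as illustrative assumptions rather than the paper's exact losses:

```python
import numpy as np

def box_mask(h, w, boxes):
    # Binary mask that keeps only ground-truth box regions,
    # giving the auxiliary branch an object-centric view.
    m = np.zeros((h, w))
    for x1, y1, x2, y2 in boxes:
        m[y1:y2, x1:x2] = 1.0
    return m

def consistency_loss(f_aux, f_ord):
    # Mean-squared distance aligning auxiliary and ordinary
    # branch features (a simple stand-in for the paper's loss).
    return float(np.mean((f_aux - f_ord) ** 2))

img = np.ones((3, 8, 8))                  # toy CHW image
mask = box_mask(8, 8, [(1, 1, 5, 5)])     # one 4x4 object box
masked = img * mask                       # box-masked auxiliary input
loss = consistency_loss(masked, img)
print(round(loss, 4))                     # 0.75
```

Minimizing such a term pulls the ordinary branch's features toward the object-focused auxiliary representation, which is the mechanism behind "domain-invariant object features" in the bullets above.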
Boyong He
Xiamen University
Yuxiang Ji
Xiamen University
Zhuoyue Tan
Institute of Artificial Intelligence, Xiamen University
Liaoni Wu
Institute of Artificial Intelligence, Xiamen University; School of Aerospace Engineering, Xiamen University