De-Simplifying Pseudo Labels to Enhancing Domain Adaptive Object Detection

📅 2025-07-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In traffic-scene object detection, high annotation costs and the “easy-label bias” inherent in unsupervised domain adaptation (UDA) self-labeling methods severely limit performance—causing them to underperform compared to domain-alignment approaches. Method: We propose a de-simplification pseudo-labeling strategy that (i) constructs an instance-level memory bank for dynamic pseudo-label filtering and updating, (ii) incorporates adversarial sample augmentation to enhance hard-example learning, and (iii) designs an adaptive weighted loss to suppress dominance by easy samples. Contribution/Results: Our method significantly reduces the proportion of easy samples across four mainstream UDA detection benchmarks, improving mean Average Precision (mAP) by 4.2–7.8 percentage points over strong baselines. It is the first work to systematically identify and mitigate easy-label bias in self-labeled detectors, establishing a novel, efficient, and lightweight paradigm for UDA object detection.

Technology Category

Application Category

📝 Abstract
Despite its significant success, object detection in traffic and transportation scenarios requires time-consuming and laborious efforts in acquiring high-quality labeled data. Therefore, Unsupervised Domain Adaptation (UDA) for object detection has recently gained increasing research attention. UDA for object detection has been dominated by domain alignment methods, which achieve top performance. Recently, self-labeling methods have gained popularity due to their simplicity and efficiency. In this paper, we investigate the limitations that prevent self-labeling detectors from achieving commensurate performance with domain alignment methods. Specifically, we identify the high proportion of simple samples during training, i.e., the simple-label bias, as the central cause. We propose a novel approach called De-Simplifying Pseudo Labels (DeSimPL) to mitigate the issue. DeSimPL utilizes an instance-level memory bank to implement an innovative pseudo label updating strategy. Then, adversarial samples are introduced during training to enhance the proportion. Furthermore, we propose an adaptive weighted loss to avoid the model suffering from an abundance of false positive pseudo labels in the late training period. Experimental results demonstrate that DeSimPL effectively reduces the proportion of simple samples during training, leading to a significant performance improvement for self-labeling detectors. Extensive experiments conducted on four benchmarks validate our analysis and conclusions.
Problem

Research questions and friction points this paper is trying to address.

Addresses simple-label bias in self-labeling object detection
Proposes DeSimPL to balance pseudo label complexity
Enhances performance of domain adaptive object detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

De-Simplifying Pseudo Labels (DeSimPL) method
Instance-level memory bank strategy
Adaptive weighted loss function
🔎 Similar Papers
No similar papers found.