Align and Distill: Unifying and Improving Domain Adaptive Object Detection

📅 2024-03-18
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing domain adaptive object detection (DAOD) research suffers from benchmark fragmentation, weak baselines, inconsistent evaluation protocols, and insufficient data diversity. This work identifies three critical pitfalls in DAOD evaluation and proposes the ALDI framework, a unified benchmarking and implementation infrastructure, alongside a fair, modern training and evaluation protocol and CFC-DAOD, a new benchmark enabling evaluation on diverse real-world data. It further introduces ALDI++, a method combining feature- and region-level domain alignment with teacher-student knowledge distillation. ALDI++ outperforms the previous state of the art by +3.5 AP₅₀ on Cityscapes→Foggy Cityscapes, +5.7 AP₅₀ on Sim10k→Cityscapes (the only method to beat a fair baseline there), and +0.6 AP₅₀ on CFC Kenai→Channel. The framework, protocol, dataset, and code are publicly released to advance standardization, fairness, and reproducibility in DAOD research.
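The summary above mentions teacher-student knowledge distillation as one of ALDI++'s core components. A minimal sketch of the general Mean Teacher pattern common in this line of work, an EMA-updated teacher providing soft targets for the student, is shown below in plain Python. The function names and structure are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import math

def ema_update(teacher_params, student_params, alpha=0.999):
    """Update teacher weights as an exponential moving average (EMA)
    of the student weights: t <- alpha * t + (1 - alpha) * s."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

def _softmax(logits, temperature):
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def distill_loss(student_logits, teacher_logits, temperature=1.0):
    """Soft-label distillation loss: cross-entropy between the
    teacher's softened class distribution and the student's."""
    p_teacher = _softmax(teacher_logits, temperature)
    log_p_student = [math.log(p) for p in _softmax(student_logits, temperature)]
    return -sum(pt * lps for pt, lps in zip(p_teacher, log_p_student))

# Example: after one student step, the teacher drifts slightly toward it,
# and the student is penalized for disagreeing with the teacher's soft labels.
teacher = ema_update([1.0, 1.0], [0.0, 2.0], alpha=0.99)
loss = distill_loss(student_logits=[0.5, 1.5], teacher_logits=[1.0, 2.0])
```

In practice the teacher also generates pseudo-labels on unlabeled target-domain images, but the EMA update and soft-target loss above are the two mechanisms the "distill" half of the method's name refers to.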

📝 Abstract
Object detectors often perform poorly on data that differs from their training set. Domain adaptive object detection (DAOD) methods have recently demonstrated strong results on addressing this challenge. Unfortunately, we identify systemic benchmarking pitfalls that call past results into question and hamper further progress: (a) Overestimation of performance due to underpowered baselines, (b) Inconsistent implementation practices preventing transparent comparisons of methods, and (c) Lack of generality due to outdated backbones and lack of diversity in benchmarks. We address these problems by introducing: (1) A unified benchmarking and implementation framework, Align and Distill (ALDI), enabling comparison of DAOD methods and supporting future development, (2) A fair and modern training and evaluation protocol for DAOD that addresses benchmarking pitfalls, (3) A new DAOD benchmark dataset, CFC-DAOD, enabling evaluation on diverse real-world data, and (4) A new method, ALDI++, that achieves state-of-the-art results by a large margin. ALDI++ outperforms the previous state-of-the-art by +3.5 AP50 on Cityscapes to Foggy Cityscapes, +5.7 AP50 on Sim10k to Cityscapes (where ours is the only method to outperform a fair baseline), and +0.6 AP50 on CFC Kenai to Channel. Our framework, dataset, and state-of-the-art method offer a critical reset for DAOD and provide a strong foundation for future research. Code and data are available: https://github.com/justinkay/aldi and https://github.com/visipedia/caltech-fish-counting.
Problem

Research questions and friction points this paper is trying to address.

Addresses poor object detector performance on data that differs from the training distribution.
Identifies and resolves systemic benchmarking pitfalls in domain adaptive object detection.
Introduces a unified framework and a new dataset for fairer, more diverse DAOD evaluation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified benchmarking framework for DAOD methods
Modern training protocol addressing benchmarking pitfalls
New DAOD benchmark dataset for diverse real-world data