🤖 AI Summary
To address the high annotation cost of 3D object detection in real-world road scenarios, and the severe pseudo-label noise that arises in cross-agent deployment from viewpoint discrepancies and localization errors, this paper proposes a collaborative self-training framework that requires neither raw sensor data sharing nor large-scale ground-truth labeling. The method introduces three key components: (1) a distance-aware curriculum learning strategy that dynamically selects pseudo-labels based on the consistency between neighboring agents' predictions and the ego agent's field of view; (2) a lightweight pseudo-label quality assessment and refinement module, trainable from only a handful of annotated samples, that jointly improves localization and classification confidence; and (3) a multi-source heterogeneous prediction alignment mechanism enabling sensor-agnostic cross-domain collaboration. Evaluated on a real-world cooperative driving dataset, the approach achieves performance close to full supervision using only a small number of annotated samples, while significantly improving generalization across sensors, detectors, and domains.
📝 Abstract
Accurate 3D object detection in real-world environments requires a large amount of high-quality annotated data. Acquiring such data is tedious and expensive, and the effort often must be repeated when a new sensor is adopted or when the detector is deployed in a new environment. We investigate a new scenario for constructing 3D object detectors: learning from the predictions of a nearby unit that is equipped with an accurate detector. For example, when a self-driving car enters a new area, it may learn from other traffic participants whose detectors have been optimized for that area. This setting is label-efficient, sensor-agnostic, and communication-efficient: nearby units only need to share their predictions with the ego agent (e.g., car). Naively using the received predictions as ground truth to train the ego car's detector, however, leads to inferior performance. We systematically study the problem and identify viewpoint mismatches and mislocalization (due to synchronization and GPS errors) as the main causes, which unavoidably result in false positives, false negatives, and inaccurate pseudo labels. We propose a distance-based curriculum: first learning from closer units with similar viewpoints, and subsequently improving the quality of other units' predictions via self-training. We further demonstrate that an effective pseudo-label refinement module can be trained with a handful of annotated samples, greatly reducing the amount of data needed to train an object detector. We validate our approach on a recently released real-world collaborative driving dataset, using reference cars' predictions as pseudo labels for the ego car. Extensive experiments covering several scenarios (e.g., different sensors, detectors, and domains) demonstrate the effectiveness of our approach toward label-efficient learning of 3D perception from other units' predictions.
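The distance-based curriculum above can be sketched as a simple pseudo-label filter whose trusted radius grows over self-training rounds: early rounds keep only predictions close to the ego car (similar viewpoint, small localization error), later rounds admit farther ones. This is an illustrative sketch, not the paper's implementation; the `PseudoBox` structure, thresholds, and the linear radius schedule are all assumptions chosen for clarity.

```python
import math
from dataclasses import dataclass

@dataclass
class PseudoBox:
    """A 3D detection received from a nearby agent, expressed in the
    ego frame. Only the fields needed for filtering are modeled here;
    this structure is hypothetical, not from the paper."""
    x: float       # box center, meters (ego frame)
    y: float
    score: float   # broadcasting agent's detection confidence

def curriculum_select(boxes, round_idx, total_rounds,
                      d_start=30.0, d_end=100.0, score_thresh=0.5):
    """Keep pseudo-labels within a trust radius that grows linearly
    across self-training rounds (assumed schedule). Low-confidence
    predictions are always discarded."""
    frac = round_idx / max(total_rounds - 1, 1)
    radius = d_start + frac * (d_end - d_start)
    return [b for b in boxes
            if math.hypot(b.x, b.y) <= radius and b.score >= score_thresh]
```

In early rounds, a box 90 m away would be rejected even with high confidence; by the final round the radius has widened enough to include it, once self-training has improved the detector's tolerance for viewpoint and localization noise.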