AI Summary
Existing open-vocabulary object detection methods suffer significant performance degradation under varying backgrounds due to a lack of intra-modality contextual consistency. To address this issue, this work proposes the Contextual Consistency Learning (CCL) framework, the first systematic solution to this problem. CCL introduces Contextual Bootstrapped Data Generation (CBDG) to construct image pairs of the same object across diverse backgrounds, and designs a Contextual Consistency Loss (CCLoss) to enforce invariance of feature representations under environmental changes. Departing from conventional paradigms that rely solely on contrastive learning and large-scale data, the proposed method substantially enhances cross-scenario robustness, achieving state-of-the-art results with gains of 16.3 AP and 14.9 AP over prior methods on the OmniLabel and D3 benchmarks, respectively.
Abstract
Recent advances in open-vocabulary object detection focus primarily on two aspects: scaling up datasets and leveraging contrastive learning to align the language and vision modalities. However, these approaches often neglect internal consistency within a single modality, particularly when the background or environment changes. This inconsistency causes a performance drop: the model struggles to detect the same object across different scenes, revealing a robustness gap. To address this issue, we introduce Contextual Consistency Learning (CCL), a novel framework that integrates two key strategies: Contextual Bootstrapped Data Generation (CBDG) and Contextual Consistency Loss (CCLoss). CBDG serves as a data generation mechanism, producing images that contain the same objects across diverse backgrounds; this is essential because existing datasets alone cannot support our CCL framework. CCLoss then enforces the invariance of object features under environmental changes, improving the model's robustness across scenes. Together, these strategies form a unified framework for ensuring contextual consistency within a single modality. Our method achieves state-of-the-art performance, surpassing previous approaches by +16.3 AP on OmniLabel and +14.9 AP on D3. These results demonstrate the importance of enforcing intra-modal consistency, which significantly enhances model generalization in diverse environments. Our code is publicly available at: https://github.com/bozhao-li/CCL.
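To make the core idea concrete, below is a minimal, hypothetical sketch of what a contextual consistency objective could look like. It assumes CBDG yields feature vectors for the same object under two different backgrounds, and penalizes their cosine distance; the paper's actual loss formulation and feature extractor are not specified here, so the function name and shapes are illustrative only.

```python
import numpy as np

def contextual_consistency_loss(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Illustrative consistency loss (not the paper's exact formulation).

    feat_a, feat_b: (N, D) features of the SAME N objects rendered on two
    different backgrounds (e.g., produced by a CBDG-style image pair).
    Returns the mean cosine distance; 0 means perfectly invariant features.
    """
    a = feat_a / np.linalg.norm(feat_a, axis=-1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a * b, axis=-1)))

# Identical features across backgrounds incur (near-)zero loss
feats = np.random.default_rng(0).random((4, 256))
print(contextual_consistency_loss(feats, feats))  # ≈ 0.0
```

In a real training loop, such a term would be added to the standard detection losses so that the detector is rewarded for producing background-invariant object features.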