Can We Challenge Open-Vocabulary Object Detectors with Generated Content in Street Scenes?

📅 2025-06-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work systematically probes the robustness boundaries of open-vocabulary object detectors (e.g., Grounding DINO) in safety-critical street scenes, focusing on their vulnerability to generatively inserted content. The authors propose an automated synthesis pipeline built on Stable Diffusion with semantic-driven sampling: the WordNet hierarchy and ChatGPT-based semantic expansion provide diverse concept sampling, while inpainting ensures spatial plausibility. Experiments reveal a strong positional dependency: detection performance is governed largely by absolute and relative object location rather than by deep semantic understanding. Analysis on LostAndFound and NuImages uncovers reproducible systematic failure modes (e.g., consistent missed detections under specific layouts). The results demonstrate that synthetically generated data effectively exposes the generalization bottlenecks of current models, yielding a benchmark and diagnostic toolkit for robustness evaluation and model improvement.
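The semantic-driven sampling step can be sketched roughly as follows. This is a toy illustration, not code from the paper: the small `HIERARCHY` dict stands in for the WordNet noun taxonomy (which the paper additionally expands via ChatGPT), and `sample_concepts` is a hypothetical helper name.

```python
import random

# Toy stand-in for the WordNet noun hierarchy; the paper walks the real
# taxonomy (plus ChatGPT-based expansion) to obtain diverse object nouns.
HIERARCHY = {
    "artifact": ["furniture", "instrument"],
    "furniture": ["armchair", "bookshelf"],
    "instrument": ["accordion", "telescope"],
    "animal": ["flamingo", "hedgehog"],
}

def leaf_nouns(root, hierarchy):
    """Collect leaf concepts (nouns with no hyponyms) under a root."""
    children = hierarchy.get(root)
    if not children:
        return [root]
    leaves = []
    for child in children:
        leaves.extend(leaf_nouns(child, hierarchy))
    return leaves

def sample_concepts(roots, hierarchy, k, seed=0):
    """Sample k distinct leaf nouns drawn from several hierarchy roots."""
    rng = random.Random(seed)
    pool = sorted({n for r in roots for n in leaf_nouns(r, hierarchy)})
    return rng.sample(pool, k)

# Sampled nouns become inpainting prompts for unusual street-scene objects.
prompts = [f"a photo of a {noun} on the road"
           for noun in sample_concepts(["artifact", "animal"], HIERARCHY, 3)]
```

The hierarchy walk is what gives the pipeline semantic diversity: sampling leaves from distant subtrees yields concepts far outside typical street-scene training distributions.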

📝 Abstract
Open-vocabulary object detectors such as Grounding DINO are trained on vast and diverse data, achieving remarkable performance on challenging datasets. As a result, it is unclear where their limitations lie, which is a major concern when such models are used in safety-critical applications. Real-world data does not provide the control required for a rigorous evaluation of model generalization. In contrast, synthetically generated data allows us to systematically explore the boundaries of model competence and generalization. In this work, we address two research questions: 1) Can we challenge open-vocabulary object detectors with generated image content? 2) Can we find systematic failure modes of those models? To address these questions, we design two automated pipelines that use Stable Diffusion to inpaint unusual objects with high semantic diversity, sampling nouns from WordNet and ChatGPT. On the synthetically generated data, we evaluate and compare multiple open-vocabulary object detectors as well as a classical object detector. The synthetic data is derived from two real-world datasets: LostAndFound, a challenging out-of-distribution (OOD) detection benchmark, and the NuImages dataset. Our results indicate that inpainting can cause open-vocabulary object detectors to overlook objects. Additionally, we find a strong dependence of open-vocabulary models on object location rather than on object semantics. This provides a systematic approach to challenging open-vocabulary models and gives valuable insights into how data could be acquired to effectively improve them.
Problem

Research questions and friction points this paper is trying to address.

Evaluate open-vocabulary detectors' limits with synthetic data
Identify systematic failure modes in object detection models
Assess model dependence on object location vs semantics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Stable Diffusion for synthetic data generation
Inpaints unusual objects sampled via WordNet and ChatGPT
Evaluates detectors on synthetic and real-world datasets
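The reported location dependence suggests a simple probe: inpaint the same object prompt at different image positions and compare which placements the detector misses. A minimal sketch of the mask-placement side using NumPy; the grid layout, mask size, and function name are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def placement_masks(h, w, mask_size, grid=(3, 3)):
    """Yield (mask, (cy, cx)) pairs: one square inpainting mask per grid cell.

    Feeding each mask (with a fixed object prompt) to an inpainting model,
    then checking which placements the detector overlooks, isolates the
    effect of object location from object semantics.
    """
    rows, cols = grid
    half = mask_size // 2
    for i in range(rows):
        for j in range(cols):
            # Center of the current grid cell.
            cy = int((i + 0.5) * h / rows)
            cx = int((j + 0.5) * w / cols)
            mask = np.zeros((h, w), dtype=np.uint8)
            y0, y1 = max(cy - half, 0), min(cy + half, h)
            x0, x1 = max(cx - half, 0), min(cx + half, w)
            mask[y0:y1, x0:x1] = 255  # white region = area to inpaint
            yield mask, (cy, cx)

# Nine candidate placements on a 600x900 street-scene image.
masks = list(placement_masks(600, 900, mask_size=128))
```

Holding the prompt fixed while only the mask position varies is the design choice that makes failures attributable to layout rather than to the inserted concept.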
Annika Mütze
University of Wuppertal
Sadia Ilyas
University of Wuppertal, Germany and Aptiv Services Deutschland GmbH, Wuppertal
Christian Dörpelkus
University of Wuppertal, Germany
Matthias Rottmann
Professor of Computer Science, Osnabrück University, Germany
Computer Vision · Deep Learning · Safe AI · Efficient AI · Numerical Linear Algebra