AutoVDC: Automated Vision Data Cleaning Using Vision-Language Models

📅 2025-07-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the prevalence of annotation errors in autonomous driving vision datasets and the high cost of manual verification, this paper proposes the first automated data cleaning framework based on vision-language models (VLMs). Methodologically, it introduces a prompt engineering strategy tailored to checking the semantic consistency of object detection annotations, systematically evaluates how well different VLMs detect erroneous annotations, and investigates the impact of fine-tuning on detection accuracy. Controlled error-injection experiments on KITTI and nuImages demonstrate that the framework localizes annotation errors with high precision and recall, significantly improving the performance of object detectors trained on the cleaned data. The core contributions are: (1) the first systematic application of VLMs to vision data cleaning; (2) strong generalizability and scalability; and (3) an efficient, reliable automation solution for quality assurance in large-scale autonomous driving dataset curation.
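The semantic-consistency check at the heart of the pipeline can be sketched as below. This is a minimal illustration, not the paper's implementation: the prompt wording, the yes/no answer parsing, and the injected `query_vlm` callable are all assumptions.

```python
# Sketch of a VLM-based semantic-consistency check for box annotations.
# The VLM backend is injected as a callable so any model can be plugged in;
# the prompt wording below is a hypothetical example, not the paper's prompt.

def build_prompt(label: str) -> str:
    """Yes/no question asking whether a cropped region matches its label."""
    return f"Does this image region contain a {label}? Answer 'yes' or 'no'."

def parse_answer(text: str) -> bool:
    """Interpret the VLM reply; True means the annotation looks consistent."""
    return text.strip().lower().startswith("yes")

def flag_inconsistent(annotations, query_vlm):
    """Return indices of annotations the VLM judges inconsistent.

    annotations: list of dicts with 'crop' (image region) and 'label' keys.
    query_vlm:   callable (crop, prompt) -> free-text answer.
    """
    flagged = []
    for i, ann in enumerate(annotations):
        answer = query_vlm(ann["crop"], build_prompt(ann["label"]))
        if not parse_answer(answer):
            flagged.append(i)
    return flagged
```

In a real pipeline, `query_vlm` would crop the image around each bounding box and send the crop plus prompt to the chosen VLM; keeping the backend injectable is what allows the paper's comparison across multiple VLMs.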

📝 Abstract
Training of autonomous driving systems requires extensive datasets with precise annotations to attain robust performance. Human annotations suffer from imperfections, and multiple iterations are often needed to produce high-quality datasets. However, manually reviewing large datasets is laborious and expensive. In this paper, we introduce the AutoVDC (Automated Vision Data Cleaning) framework and investigate the use of Vision-Language Models (VLMs) to automatically identify erroneous annotations in vision datasets, thereby enabling users to eliminate these errors and enhance data quality. We validate our approach using the KITTI and nuImages datasets, which contain object detection benchmarks for autonomous driving. To test the effectiveness of AutoVDC, we create dataset variants with intentionally injected erroneous annotations and observe the error detection rate of our approach. Additionally, we compare the detection rates of different VLMs and explore the impact of VLM fine-tuning on our pipeline. The results demonstrate our method's high performance in error detection and data cleaning experiments, indicating its potential to significantly improve the reliability and accuracy of large-scale production datasets in autonomous driving.
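The controlled error-injection protocol mentioned in the abstract can be sketched as follows. The specific error types (label swaps and box shifts), the injection rate, and the annotation schema are illustrative assumptions, not details taken from the paper.

```python
import random

def inject_errors(annotations, class_names, rate=0.1, seed=0):
    """Corrupt a fraction of annotations and record which were changed.

    annotations: list of dicts with 'label' and 'bbox' [x1, y1, x2, y2].
    Returns (corrupted_annotations, set_of_corrupted_indices), leaving the
    input list untouched. Error types here are illustrative examples.
    """
    rng = random.Random(seed)
    corrupted, injected = [], set()
    for i, ann in enumerate(annotations):
        ann = dict(ann)  # shallow copy so the clean dataset is preserved
        if rng.random() < rate:
            injected.add(i)
            if rng.random() < 0.5:
                # Label swap: replace the class with a different one.
                others = [c for c in class_names if c != ann["label"]]
                ann["label"] = rng.choice(others)
            else:
                # Box shift: move the box so it no longer covers the object.
                dx = 0.5 * (ann["bbox"][2] - ann["bbox"][0])
                ann["bbox"] = [ann["bbox"][0] + dx, ann["bbox"][1],
                               ann["bbox"][2] + dx, ann["bbox"][3]]
        corrupted.append(ann)
    return corrupted, injected
```

Because the injected indices are recorded, the detection rate of any cleaning pipeline can then be measured exactly against this known ground truth.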
Problem

Research questions and friction points this paper is trying to address.

Automating detection of erroneous annotations in vision datasets
Reducing labor and cost in cleaning large autonomous driving datasets
Improving data quality using Vision-Language Models (VLMs)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated error detection using Vision-Language Models
Fine-tuning VLMs for improved error-detection accuracy
Validation on KITTI and nuImages datasets
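Given the set of indices where errors were injected and the set the pipeline flagged, the precision and recall reported in the error-detection experiments reduce to a simple set computation, sketched here:

```python
def detection_metrics(flagged, injected):
    """Precision and recall of error flagging against known injected errors.

    flagged:  indices the cleaning pipeline marked as erroneous.
    injected: indices where errors were actually injected (ground truth).
    """
    flagged, injected = set(flagged), set(injected)
    true_pos = len(flagged & injected)
    precision = true_pos / len(flagged) if flagged else 0.0
    recall = true_pos / len(injected) if injected else 0.0
    return precision, recall
```

High precision means few clean annotations are wrongly discarded; high recall means few injected errors survive into the cleaned dataset.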