🤖 AI Summary
Large-scale labels automatically generated by multimodal foundation models (e.g., CLIP, LLaVA) lack ground-truth annotations; existing evaluation methods rely on limited metrics or small-sample inspections, hindering the detection of latent errors, especially in open-vocabulary image segmentation.
Method: We propose VISTA, the first human-in-the-loop visual analytics framework tailored for this task. It integrates multimodal output analysis, visual clustering-based diagnosis, interactive label correction, and an expert feedback loop to enable fine-grained quality assessment and iterative refinement.
Contribution/Results: By introducing visual analytics into the quality assurance pipeline for auto-generated labels, our approach overcomes the limitations of purely quantitative or sampling-based validation. Evaluated on two benchmark datasets, it significantly improves downstream task performance while enabling efficient identification of systematic labeling errors and enhancing model generalization.
📝 Abstract
Advances in multi-modal foundation models (FMs) such as CLIP and LLaVA have facilitated the auto-labeling of large-scale datasets, enhancing model performance in challenging downstream tasks such as open-vocabulary object detection and segmentation. However, the quality of FM-generated labels remains understudied, as existing approaches prioritize data quantity over quality. This is because validating large volumes of data without ground truth presents a considerable challenge in practice. Existing methods typically either rely on limited metrics to identify problematic data, lacking a comprehensive perspective, or apply human validation to only a small fraction of the data, failing to address the full spectrum of potential issues. To overcome these challenges, we introduce VISTA, a visual analytics framework that improves data quality to enhance the performance of multi-modal models. Targeting the complex and demanding domain of open-vocabulary image segmentation, VISTA integrates multi-phased data validation strategies with human expertise, enabling humans to identify, understand, and correct hidden issues within FM-generated labels. Through detailed use cases on two benchmark datasets and expert reviews, we demonstrate VISTA's effectiveness from both quantitative and qualitative perspectives.