🤖 AI Summary
Existing vision-language models (VLMs) struggle to disambiguate targets from backgrounds in low-contrast, high-clutter scenarios—particularly color-camouflaged images—leading to substantial performance degradation across nine visual question-answering tasks (e.g., recognition, counting, comparison, spatial reasoning).
Method: We introduce the first large-scale, multi-task benchmark explicitly designed for color camouflage, built upon Ishihara plate extensions with novel augmentations: multi-geometric filling, chromatic separation control, and parametric modulation of density, occlusion, and rotation; all metadata are exhaustively annotated. We further propose a model-agnostic contrastive learning strategy coupled with a contour-alignment mechanism to explicitly reconstruct global shape representations.
Contribution/Results: Human and model evaluations confirm the benchmark’s high difficulty. Our method significantly improves VLMs’ target identification accuracy and structural understanding under camouflage, establishing new baselines for robust visual reasoning in perceptually challenging conditions.
📝 Abstract
Vision-Language Models (VLMs) have advanced multimodal understanding, yet still struggle when targets are embedded in cluttered backgrounds requiring figure-ground segregation. To address this, we introduce ChromouVQA, a large-scale, multi-task benchmark based on Ishihara-style chromatic camouflaged images. We extend classic dot plates with multiple fill geometries and vary chromatic separation, density, size, occlusion, and rotation, recording full metadata for reproducibility. The benchmark covers nine vision-question-answering tasks, including recognition, counting, comparison, and spatial reasoning. Evaluations of humans and VLMs reveal large gaps, especially under subtle chromatic contrast or disruptive geometric fills. We also propose a model-agnostic contrastive recipe aligning silhouettes with their camouflaged renderings, improving recovery of global shapes. ChromouVQA provides a compact, controlled benchmark for reproducible evaluation and extension. Code and dataset are available at https://github.com/Chromou-VQA-Benchmark/Chromou-VQA.