ChromouVQA: Benchmarking Vision-Language Models under Chromatic Camouflaged Images

📅 2025-11-30
🤖 AI Summary
Problem: Existing vision-language models (VLMs) struggle to disambiguate targets from backgrounds in low-contrast, high-clutter scenarios, particularly color-camouflaged images, leading to substantial performance degradation across nine visual question-answering tasks (e.g., recognition, counting, comparison, spatial reasoning).
Method: We introduce the first large-scale, multi-task benchmark explicitly designed for color camouflage, built upon Ishihara plate extensions with novel augmentations: multi-geometric filling, chromatic separation control, and parametric modulation of density, occlusion, and rotation; all metadata are exhaustively annotated. We further propose a model-agnostic contrastive learning strategy coupled with a contour-alignment mechanism to explicitly reconstruct global shape representations.
Contribution/Results: Human and model evaluations confirm the benchmark's high difficulty. Our method significantly improves VLMs' target identification accuracy and structural understanding under camouflage, establishing new baselines for robust visual reasoning in perceptually challenging conditions.

📝 Abstract
Vision-Language Models (VLMs) have advanced multimodal understanding, yet still struggle when targets are embedded in cluttered backgrounds requiring figure-ground segregation. To address this, we introduce ChromouVQA, a large-scale, multi-task benchmark based on Ishihara-style chromatic camouflaged images. We extend classic dot plates with multiple fill geometries and vary chromatic separation, density, size, occlusion, and rotation, recording full metadata for reproducibility. The benchmark covers nine vision-question-answering tasks, including recognition, counting, comparison, and spatial reasoning. Evaluations of humans and VLMs reveal large gaps, especially under subtle chromatic contrast or disruptive geometric fills. We also propose a model-agnostic contrastive recipe aligning silhouettes with their camouflaged renderings, improving recovery of global shapes. ChromouVQA provides a compact, controlled benchmark for reproducible evaluation and extension. Code and dataset are available at https://github.com/Chromou-VQA-Benchmark/Chromou-VQA.
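The abstract's generation procedure (dot plates whose figure/ground segregation is controlled by chromatic separation and dot density) can be sketched as follows. This is a minimal illustrative toy, not the benchmark's actual generator; the function name and parameters are assumptions.

```python
import colorsys
import numpy as np

def ishihara_plate(mask, n_dots=800, dot_radius=(3, 7), hue_sep=30.0, seed=0):
    """Toy Ishihara-style renderer (illustrative, not the ChromouVQA generator).

    Scatters colored dots on a white canvas; dots whose centre falls inside
    the boolean silhouette `mask` take a hue offset by `hue_sep` degrees from
    the background hue. Smaller `hue_sep` means weaker chromatic separation,
    i.e. stronger camouflage; `n_dots` controls density.
    """
    rng = np.random.default_rng(seed)
    h, w = mask.shape
    img = np.full((h, w, 3), 255, dtype=np.uint8)  # white canvas
    yy, xx = np.ogrid[:h, :w]
    base_hue = float(rng.uniform(0.0, 360.0))
    for _ in range(n_dots):
        cy = int(rng.integers(0, h))
        cx = int(rng.integers(0, w))
        r = int(rng.integers(dot_radius[0], dot_radius[1] + 1))
        # Figure dots get the offset hue, ground dots the base hue.
        hue = (base_hue + hue_sep) % 360.0 if mask[cy, cx] else base_hue
        rgb = colorsys.hsv_to_rgb(hue / 360.0, 0.6 + 0.3 * rng.random(), 0.9)
        color = (np.array(rgb) * 255).astype(np.uint8)
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r * r] = color  # stamp a disk
    return img
```

Occlusion, rotation, and alternative fill geometries (the paper's other axes of variation) would be further parameters layered on the same idea; recording the seed and parameter values is what makes each plate reproducible.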
Problem

Research questions and friction points this paper is trying to address.

Evaluates VLMs on chromatic camouflaged images
Benchmarks nine vision-question-answering tasks
Proposes a contrastive method to improve shape recovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces ChromouVQA benchmark with Ishihara-style camouflaged images
Proposes contrastive recipe aligning silhouettes with camouflaged renderings
Varies chromatic separation, density, and occlusion, with full metadata recorded for reproducible evaluation
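The contrastive recipe above pairs each clean silhouette with its camouflaged rendering. A minimal sketch of one plausible instantiation is an in-batch InfoNCE loss over the two sets of embeddings; the function name, temperature, and NumPy formulation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def silhouette_alignment_loss(sil_emb, cam_emb, temperature=0.07):
    """InfoNCE-style alignment loss (illustrative sketch).

    Row i of `sil_emb` (embedding of a clean silhouette) should be closest
    to row i of `cam_emb` (embedding of its camouflaged rendering); the
    other rows in the batch serve as in-batch negatives.
    """
    s = sil_emb / np.linalg.norm(sil_emb, axis=1, keepdims=True)
    c = cam_emb / np.linalg.norm(cam_emb, axis=1, keepdims=True)
    logits = (s @ c.T) / temperature                     # (B, B) cosine sims
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))            # positives on diagonal
```

Being model-agnostic, a recipe of this shape only requires embeddings from the VLM's vision encoder for both views; minimizing the loss pulls each camouflaged rendering toward its silhouette, encouraging recovery of the global shape.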
👥 Authors
Yunfei Zhang (Amazon)
Yizhuo He (Google)
Yuanxun Shao (MurcuryMind)
Zhengtao Yao (University of Southern California)
Haoyan Xu (University of Southern California; Machine Learning)
Junhao Dong (Nanyang Technological University)
Zhen Yao (Ph.D. student, Lehigh University; Multimodal Perception, Computer Vision, Deep Learning)
Zhikang Dong (Stony Brook University)