When Visual Evidence is Ambiguous: Pareidolia as a Diagnostic Probe for Vision Models

📅 2026-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the semantic interpretation mechanisms, uncertainty, and bias behaviors of vision models under ambiguous inputs, specifically face pareidolia. It introduces face pareidolia as a controllable probe and establishes a unified representation-level diagnostic framework, systematically evaluating six models spanning four representational regimes (CLIP-B/32, CLIP-L/14, LLaVA-1.5-7B, ViT, YOLOv8, and RetinaFace) on detection, localization, uncertainty quantification, and category/emotion bias. The work identifies three distinct interpretation mechanisms and demonstrates that representational architecture, not decision thresholds, primarily governs behavior under ambiguity. Furthermore, it shows that uncertainty and bias are decoupled: vision-language models (VLMs) exhibit semantic over-activation, the Vision Transformer (ViT) remains diffuse yet largely unbiased, and detectors rely on conservative priors to suppress pareidolia responses. These findings offer a pathway toward improving the semantic robustness of visual systems.

📝 Abstract
When visual evidence is ambiguous, vision models must decide whether to interpret face-like patterns as meaningful. Face pareidolia, the perception of faces in non-face objects, provides a controlled probe of this behavior. We introduce a representation-level diagnostic framework that analyzes detection, localization, uncertainty, and bias across class, difficulty, and emotion in face pareidolia images. Under a unified protocol, we evaluate six models spanning four representational regimes: vision-language models (VLMs; CLIP-B/32, CLIP-L/14, LLaVA-1.5-7B), pure vision classification (ViT), general object detection (YOLOv8), and face detection (RetinaFace). Our analysis reveals three mechanisms of interpretation under ambiguity. VLMs exhibit semantic overactivation, systematically pulling ambiguous non-human regions toward the Human concept, with LLaVA-1.5-7B producing the strongest and most confident over-calls, especially for negative emotions. ViT instead follows an uncertainty-as-abstention strategy, remaining diffuse yet largely unbiased. Detection-based models achieve low bias through conservative priors that suppress pareidolia responses even when localization is controlled. These results show that behavior under ambiguity is governed more by representational choices than score thresholds, and that uncertainty and bias are decoupled: low uncertainty can signal either safe suppression, as in detectors, or extreme over-interpretation, as in VLMs. Pareidolia therefore provides a compact diagnostic and a source of ambiguity-aware hard negatives for probing and improving the semantic robustness of vision-language systems. Code will be released upon publication.
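The abstract's claim that uncertainty and bias are decoupled can be made concrete with a small numerical sketch. Below, hypothetical class probabilities (all numbers are illustrative, not from the paper) stand in for the three regimes: a VLM-style confident over-call toward the Human concept, a ViT-style diffuse distribution, and a detector-style confident suppression. Shannon entropy serves as the uncertainty measure; the probability mass on the Human class serves as a simple bias score.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution, in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical class probabilities assigned to one pareidolia image.
# Classes: [human, animal, object]. Values are illustrative only.
vlm_overcall = [0.92, 0.05, 0.03]   # low entropy, high human-bias (over-interpretation)
vit_diffuse  = [0.36, 0.33, 0.31]   # high entropy, low bias (abstention-like)
detector     = [0.04, 0.06, 0.90]   # low entropy, low human-bias (safe suppression)

for name, probs in [("VLM", vlm_overcall), ("ViT", vit_diffuse), ("Detector", detector)]:
    human_bias = probs[0]  # mass pulled toward the Human concept
    print(f"{name}: entropy={entropy(probs):.2f} nats, human-bias={human_bias:.2f}")
```

Both the VLM-like and detector-like distributions have low entropy, yet they sit at opposite ends of the bias axis, which is exactly the sense in which low uncertainty alone cannot distinguish safe suppression from over-interpretation.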
Problem

Research questions and friction points this paper is trying to address.

pareidolia
visual ambiguity
vision models
semantic robustness
face perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

pareidolia
vision-language models
ambiguity
uncertainty
semantic robustness