🤖 AI Summary
This work investigates the limitations of vision-language models (VLMs) in abstract visual pattern recognition: specifically, their ability to interpret natural language descriptions of visual patterns and to reason about spatial relations, configurational composition, and cross-modal alignment within writing systems. To this end, we introduce GlyphPattern, a benchmark of 954 items spanning 40 writing systems, designed to systematically evaluate VLMs’ understanding of abstract patterns inspired by writing systems. GlyphPattern renders each pattern in multiple visual styles and builds on design principles from cognitive science, addressing a gap in the evaluation of abstract reasoning. We conduct zero-shot and few-shot evaluations of state-of-the-art VLMs and perform fine-grained error analysis to disentangle bottlenecks in visual encoding, linguistic comprehension, and generalization. Results reveal that even the best model, GPT-4o, achieves only 55% accuracy, substantially below human performance, exposing fundamental deficits in spatial reference resolution, configurational reasoning, and cross-style generalization.
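The zero-shot evaluation can be pictured as a simple loop: render a pattern, pair it with a candidate description, and ask the model for a binary judgment. The sketch below is a minimal illustration, not the paper's released harness; the item fields (`image_path`, `description`, `label`), the prompt wording, and the accuracy tally are assumptions, and the OpenAI client is used only because GPT-4o is the model named in the results.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(image_path: str, description: str) -> bool:
    """Ask GPT-4o whether a rendered glyph pattern matches a description."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Does this pattern match the description?\n"
                         f"Description: {description}\nAnswer yes or no."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip().lower().startswith("yes")

# Hypothetical items: (rendered image, human-written description, gold label).
items = [("pattern_001.png", "Every glyph contains a closed loop.", True)]
accuracy = sum(judge(img, desc) == label for img, desc, label in items) / len(items)
print(f"zero-shot accuracy: {accuracy:.2%}")
```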
📝 Abstract
Vision-Language Models (VLMs), building on powerful large language models, have made rapid progress in reasoning across visual and textual data. While VLMs perform well on vision tasks they are trained on, our results highlight key challenges in abstract pattern recognition. We present GlyphPattern, a 954-item dataset that pairs 318 human-written descriptions of visual patterns from 40 writing systems with three visual presentation styles. GlyphPattern evaluates abstract pattern recognition in VLMs, requiring models to understand and judge natural language descriptions of visual patterns. The patterns are drawn from a large-scale cognitive science investigation of human writing systems; as a result, they are rich in spatial reference and compositionality. Our experiments show that GlyphPattern is challenging for state-of-the-art VLMs (GPT-4o achieves only 55% accuracy), with marginal gains from few-shot prompting. Our detailed error analysis reveals challenges at multiple levels, including visual processing, natural language understanding, and pattern generalization.
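The 954-item count follows directly from the pairing the abstract describes: 318 descriptions, each rendered in three presentation styles. Below is a minimal sketch of that cross product; the field names, the style identifiers, and the placeholder description are hypothetical, since the abstract states only the counts.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical style identifiers; the abstract says only that there are three.
STYLES = ["style_a", "style_b", "style_c"]

@dataclass(frozen=True)
class GlyphPatternItem:
    description: str    # one of the 318 human-written pattern descriptions
    writing_system: str  # one of the 40 writing systems
    style: str           # one of the three visual presentation styles

def build_items(descriptions: list[tuple[str, str]]) -> list[GlyphPatternItem]:
    """Cross each (description, writing_system) pair with all rendering styles."""
    return [GlyphPatternItem(desc, ws, style)
            for (desc, ws), style in product(descriptions, STYLES)]

# Placeholder descriptions just to check the arithmetic.
descriptions = [("Every glyph contains a closed loop.", "Armenian")] * 318
items = build_items(descriptions)
assert len(items) == 318 * 3 == 954  # matches the dataset size in the abstract
```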