Reasoning under Vision: Understanding Visual-Spatial Cognition in Vision-Language Models for CAPTCHA

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
State-of-the-art commercial vision-language models (e.g., GPT, Claude, Gemini) exhibit poor performance—only ~21.9% accuracy—on challenging real-world CAPTCHA spatial reasoning tasks, revealing a fundamental deficiency in explicit, stepwise reasoning. Method: We introduce CAPTCHA-X, the first realistic CAPTCHA benchmark featuring fine-grained, human-annotated reasoning chains, along with five novel process-oriented evaluation metrics. We further propose a general reasoning-augmentation framework grounded in an agent architecture, integrating coordinate-based visual grounding, structured chain-of-thought prompting, and multi-stage verification. Contribution/Results: Our method achieves 83.9% average accuracy across five complex CAPTCHA categories—surpassing baselines by 62 percentage points—and provides the first systematic empirical validation that explicit spatial reasoning critically enhances the cognitive capabilities of vision-language models.
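The three-stage pipeline the summary describes (coordinate-based grounding, structured chain-of-thought, multi-stage verification) can be sketched as a minimal agent loop. This is an illustrative assumption of how such a framework might be wired together, not the authors' actual implementation; all function names (`ground`, `reason`, `verify`, `solve_captcha`) and the toy model are hypothetical.

```python
def ground(vlm, image):
    """Stage 1: ask the model for coordinates of candidate targets."""
    return vlm(f"List (x, y) coordinates of candidate targets in {image}.")

def reason(vlm, image, candidates):
    """Stage 2: structured chain-of-thought over the grounded candidates."""
    return vlm(
        f"Given candidates {candidates} in {image}, "
        "reason step by step before choosing final coordinates."
    )

def verify(vlm, image, answer):
    """Stage 3: ask the model to re-check its own answer."""
    ok = vlm(f"Does {answer} solve the CAPTCHA in {image}? yes/no")
    return answer if ok == "yes" else None

def solve_captcha(vlm, image, max_retries=1):
    """Run ground -> reason -> verify, retrying on failed verification."""
    answer = None
    for _ in range(max_retries + 1):
        candidates = ground(vlm, image)
        answer = reason(vlm, image, candidates)
        checked = verify(vlm, image, answer)
        if checked is not None:
            return checked
    return answer  # fall back to the last unverified answer

# Toy stand-in "model" so the sketch runs end to end.
def toy_vlm(prompt):
    if prompt.startswith("List"):
        return [(12, 34), (56, 78)]
    if prompt.startswith("Given"):
        return (56, 78)
    return "yes"

print(solve_captcha(toy_vlm, "captcha.png"))  # -> (56, 78)
```

The key design point the paper argues for is that Stage 2 (explicit stepwise reasoning before emitting coordinates) is what lifts accuracy, rather than asking the model for coordinates directly.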

📝 Abstract
CAPTCHA, originally designed to distinguish humans from robots, has evolved into a real-world benchmark for assessing the spatial reasoning capabilities of vision-language models. In this work, we first show that step-by-step reasoning is crucial for vision-language models (VLMs) to solve CAPTCHAs, which represent high-difficulty spatial reasoning tasks, and that current commercial VLMs still struggle with such reasoning. In particular, we observe that most commercial VLMs (e.g., Gemini, Claude, GPT) fail to effectively solve CAPTCHAs and thus achieve low accuracy (around 21.9 percent). However, our findings indicate that requiring the model to perform step-by-step reasoning before generating the final coordinates can significantly enhance its solving accuracy, underscoring the severity of the gap. To systematically study this issue, we introduce CAPTCHA-X, the first real-world CAPTCHA benchmark with reasoning, covering seven categories of CAPTCHAs (e.g., Gobang, hCaptcha) with step-by-step action solutions and grounding annotations. We further define five reasoning-oriented metrics that enable a comprehensive evaluation of models' reasoning capabilities. To validate the effectiveness of reasoning, we also propose a general agentic VLM-based framework that incorporates the model's inherent reasoning abilities. Our method achieves state-of-the-art performance across five high-difficulty CAPTCHA types, with an average solving accuracy of 83.9 percent, substantially surpassing existing baselines. These results reveal the limitations of current models and highlight the importance of reasoning in advancing visual-spatial challenges in the future.
Problem

Research questions and friction points this paper is trying to address.

Evaluating spatial reasoning capabilities in vision-language models
Addressing low CAPTCHA solving accuracy in commercial VLMs
Developing step-by-step reasoning methods for visual-spatial tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introducing CAPTCHA-X benchmark with reasoning annotations
Proposing agentic VLM framework using step-by-step reasoning
Achieving high accuracy on spatial tasks with reasoning metrics
Python Song
Columbia University
Robot Learning · Proactive Learning · Neuroscience · Foundation Model
Luke Tenyi Chang
Department of Computer Science, Columbia University
Yun-Yun Tsai
Ph.D. student in Computer Science, Columbia University
Adversarial Machine Learning · AI Security · Model Robustness · Transfer Learning
Penghui Li
Department of Computer Science, Columbia University
Junfeng Yang
Department of Computer Science, Columbia University