CAPTURE: A Benchmark and Evaluation for LVLMs in CAPTCHA Resolving

πŸ“… 2025-12-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing CAPTCHA benchmarks lack LVLM-specific design, suffer from limited coverage of CAPTCHA types, and employ annotation schemes misaligned with LVLM capabilities. To address this, we introduce CAPTURE, the first CAPTCHA benchmark designed specifically for large vision-language models (LVLMs). CAPTURE comprises 25 fine-grained CAPTCHA subtypes across four major categories, sourced from 31 real-world service providers. It features a taxonomy-driven classification framework and an LVLM-adapted, fine-grained labeling scheme. Furthermore, we propose a multi-dimensional evaluation protocol explicitly designed to accommodate LVLM output characteristics (e.g., free-form text generation, multimodal reasoning). Extensive experiments reveal that state-of-the-art LVLMs achieve only 31.7% average accuracy on CAPTURE, exposing critical weaknesses in interference-robust text recognition and compositional reasoning. CAPTURE fills a fundamental gap in LVLM-specific security evaluation, providing a reproducible, systematic, and quantitative tool for model diagnostics and robustness research.
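The summary notes that the evaluation protocol must accommodate free-form LVLM text output rather than a fixed answer format. The paper's actual scoring rules are not given here, but a minimal accuracy metric tolerant of free-form responses might look like the sketch below (the normalization rules and function names are illustrative assumptions, not the authors' method):

```python
import re

def normalize(text: str) -> str:
    """Lowercase and strip non-alphanumerics so a free-form LVLM reply
    like 'The text reads 7B3K9.' can be compared to a CAPTCHA label."""
    return re.sub(r"[^a-z0-9]", "", text.lower())

def captcha_accuracy(predictions, labels):
    """Exact-match accuracy after normalization; a prediction also counts
    if the normalized reply ends with the normalized label (to tolerate
    preambles such as 'The answer is ...')."""
    assert len(predictions) == len(labels)
    hits = sum(
        normalize(p) == normalize(g) or normalize(p).endswith(normalize(g))
        for p, g in zip(predictions, labels)
    )
    return hits / len(labels)

preds = ["The text reads 7B3K9.", "x4q2", "I cannot solve this."]
gold = ["7B3K9", "X4Q2", "M8P1"]
print(f"{captcha_accuracy(preds, gold):.3f}")  # 2 of 3 answers match
```

A real protocol for a benchmark like CAPTURE would also need per-subtype scoring (e.g., coordinates for click-based CAPTCHAs), which a single string metric cannot capture.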

πŸ“ Abstract
Benefiting from strong and efficient multi-modal alignment strategies, Large Vision-Language Models (LVLMs) can simulate human visual and reasoning capabilities, such as solving CAPTCHAs. However, existing benchmarks based on visual CAPTCHAs still face limitations. Previous studies designed their benchmarks and datasets around their own research objectives; consequently, these benchmarks cannot comprehensively cover all CAPTCHA types. Notably, there is a dearth of dedicated benchmarks for LVLMs. To address this problem, we introduce CAPTURE (CAPTCHA for Testing Under Real-world Experiments), the first CAPTCHA benchmark designed specifically for LVLMs. Our benchmark encompasses 4 main CAPTCHA types and 25 sub-types from 31 vendors. This diversity enables a multi-dimensional and thorough evaluation of LVLM performance. CAPTURE features extensive class variety, large-scale data, and unique LVLM-tailored labels, filling the gaps in previous research in terms of data comprehensiveness and labeling pertinence. When evaluated on this benchmark, current LVLMs demonstrate poor performance in solving CAPTCHAs.
Problem

Research questions and friction points this paper is trying to address.

How to evaluate LVLMs' ability to solve diverse real-world CAPTCHA types
Lack of comprehensive, LVLM-specific CAPTCHA datasets and benchmarks
Need to measure LVLM performance across multiple CAPTCHA categories and vendors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces first LVLM-specific CAPTCHA benchmark CAPTURE
Covers 4 main types and 25 subtypes from 31 vendors
Provides comprehensive evaluation with tailored labels and diversity
Jianyi Zhang
Research Scientist@Google Deepmind, PI@Duke University
LLMs Β· Generative AI Β· Trustworthy AI
Ziyin Zhou
Beijing Electric Science and Technology Institute
Xu Ji
Beijing Electric Science and Technology Institute
Shizhao Liu
Beijing Electric Science and Technology Institute
Zhangchi Zhao
Beijing Electric Science and Technology Institute