🤖 AI Summary
Existing CAPTCHA benchmarks lack LVLM-specific design, suffer from limited coverage of CAPTCHA types, and employ annotation schemes misaligned with LVLM capabilities. To address this, we introduce CAPTURE, the first dedicated evaluation benchmark for large vision-language models (LVLMs). CAPTURE comprises 25 fine-grained CAPTCHA subtypes across four major categories, sourced from 31 real-world service providers. It features a taxonomy-driven classification framework and an LVLM-adapted, fine-grained labeling scheme. Furthermore, we propose a multi-dimensional evaluation protocol explicitly designed to accommodate LVLM output characteristics (e.g., free-form text generation, multimodal reasoning). Extensive experiments reveal that state-of-the-art LVLMs achieve only 31.7% average accuracy on CAPTURE, exposing critical weaknesses in interference-robust text recognition and compositional reasoning. CAPTURE fills a fundamental gap in LVLM-specific security evaluation, providing a reproducible, systematic, and quantitative tool for model diagnostics and robustness research.
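The evaluation protocol itself is not detailed in this summary, but one concrete issue it must handle is scoring free-form LVLM text against CAPTCHA ground truth. Below is a minimal sketch, assuming a simple normalize-then-exact-match rule; the function names and sample data are illustrative, not taken from the benchmark:

```python
# Hypothetical sketch (not the benchmark's actual protocol): free-form LVLM
# answers may differ from ground truth in casing, punctuation, or whitespace,
# so a fair exact-match score normalizes both sides first.
import re

def normalize(answer: str) -> str:
    """Lowercase and keep only letters and digits."""
    return re.sub(r"[^a-z0-9]", "", answer.lower())

def accuracy(predictions, references):
    """Exact-match accuracy after normalization."""
    correct = sum(normalize(p) == normalize(r)
                  for p, r in zip(predictions, references))
    return correct / len(references)

# Illustrative example: two of three answers match after normalization.
preds = ["aB3x!", "k9Qz ", "unknown"]
refs = ["AB3X", "K9QZ", "M2TT"]
print(round(accuracy(preds, refs), 3))  # -> 0.667
```

A real protocol would also need to extract the answer span when a model wraps it in prose (e.g., "The text is: aB3x"), which simple normalization alone does not handle.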
📝 Abstract
Benefiting from strong and efficient multi-modal alignment strategies, Large Vision-Language Models (LVLMs) can emulate human visual and reasoning capabilities, such as solving CAPTCHAs. However, existing visual-CAPTCHA benchmarks remain limited: prior studies tailored their benchmarks and datasets to their own research objectives, so no existing benchmark comprehensively covers all CAPTCHA types, and dedicated benchmarks for LVLMs are notably absent. To address this problem, we introduce CAPTURE (CAPTCHA for Testing Under Real-world Experiments), the first CAPTCHA benchmark designed specifically for LVLMs. Our benchmark encompasses 4 main CAPTCHA types and 25 sub-types from 31 vendors; this diversity enables a multi-dimensional and thorough evaluation of LVLM performance. CAPTURE features extensive class variety, large-scale data, and unique LVLM-tailored labels, filling gaps left by previous research in both data comprehensiveness and labeling pertinence. When evaluated on this benchmark, current LVLMs perform poorly at solving CAPTCHAs.