VIPER Strike: Defeating Visual Reasoning CAPTCHAs via Structured Vision-Language Inference

📅 2026-01-10
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the challenge of breaking visual reasoning CAPTCHAs (VRCs), which exhibit high layout diversity and demand fine-grained perceptual understanding that general-purpose methods struggle to provide. To this end, the authors propose ViPer, a unified attack framework that integrates structured multi-object visual parsing with adaptive large language model (LLM) reasoning through a modular pipeline for visual layout parsing, attribute-semantic alignment, and target coordinate inference. ViPer is compatible with multiple mainstream LLMs, including GPT, Grok, DeepSeek, and Kimi, and achieves success rates of up to 93.2% across six major VRC platforms such as VTT and Geetest, substantially outperforming existing approaches while maintaining over 90% success across different LLM backends. Additionally, the study introduces Template-Space Randomization (TSR) as a defense strategy to enhance human–machine distinguishability.
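As a rough illustration of the TSR defense mentioned above, the sketch below resamples the surface wording of a CAPTCHA question while holding the task semantics (the target object and attribute) fixed. The template strings and the function name are illustrative assumptions for this summary, not taken from the paper.

```python
import random

# Illustrative sketch of Template-Space Randomization (TSR): the question's
# wording is resampled from a template pool, but the semantic slots
# ({obj}, {attr}) that define the task stay fixed. Templates are invented
# for this example, not drawn from the paper.
TEMPLATES = [
    "Click the {attr} {obj}.",
    "Please select the {obj} that is {attr}.",
    "Which one is the {attr} {obj}? Tap it.",
    "Find and press the {obj} whose color is {attr}.",
]

def randomize_template(obj: str, attr: str, rng: random.Random) -> str:
    # Only the linguistic surface form varies; a solver tuned to one
    # fixed phrasing now faces a much larger template space.
    return rng.choice(TEMPLATES).format(obj=obj, attr=attr)

print(randomize_template("cube", "red", random.Random(0)))
```

Because every randomized question remains a faithful paraphrase, humans are unaffected, while a solver keyed to a single fixed phrasing degrades, which is the distinguishability effect the paper reports.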

📝 Abstract
Visual Reasoning CAPTCHAs (VRCs) combine visual scenes with natural-language queries that demand compositional inference over objects, attributes, and spatial relations. They are increasingly deployed as a primary defense against automated bots. Existing solvers fall into two paradigms: vision-centric solvers, which rely on template-specific detectors but fail on novel layouts, and reasoning-centric solvers, which leverage LLMs but struggle with fine-grained visual perception. Both lack the generality needed to handle heterogeneous VRC deployments. We present ViPer, a unified attack framework that integrates structured multi-object visual perception with adaptive LLM-based reasoning. ViPer parses visual layouts, grounds attributes to question semantics, and infers target coordinates within a modular pipeline. Evaluated on six major VRC providers (VTT, Geetest, NetEase, Dingxiang, Shumei, Xiaodun), ViPer achieves up to 93.2% success, approaching human-level performance across multiple benchmarks, and consistently outperforms prior solvers such as GraphNet (83.2%), Oedipus (65.8%), and the Holistic approach (89.5%). The framework further maintains robustness across alternative LLM backbones (GPT, Grok, DeepSeek, Kimi), sustaining accuracy above 90%. To anticipate defenses, we further introduce Template-Space Randomization (TSR), a lightweight strategy that perturbs linguistic templates without altering task semantics and measurably reduces solver (i.e., attacker) performance. Our proposed design suggests directions for human-solvable but machine-resistant CAPTCHAs.
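The parse → ground → infer pipeline described in the abstract can be sketched as a toy program. Everything here is a hypothetical stand-in: `parse_layout` substitutes pre-annotated objects for ViPer's actual multi-object visual parser, and `ground_question` uses exact attribute matching in place of LLM-driven grounding; none of these interfaces come from the paper.

```python
from dataclasses import dataclass

@dataclass
class ParsedObject:
    label: str       # e.g. "cube"
    attributes: dict # e.g. {"color": "red"}
    center: tuple    # (x, y) click coordinate

def parse_layout(scene):
    # Stage 1 stand-in: a real system would run a multi-object detector
    # with attribute heads; here we just load pre-annotated objects.
    return [ParsedObject(**o) for o in scene]

def ground_question(constraints, objects):
    # Stage 2 stand-in: keep objects whose label and attributes satisfy
    # every constraint extracted from the natural-language question.
    return [
        o for o in objects
        if o.label == constraints["label"]
        and all(o.attributes.get(k) == v
                for k, v in constraints.get("attrs", {}).items())
    ]

def infer_target(candidates):
    # Stage 3: a well-posed CAPTCHA question has a unique target,
    # whose center is the coordinate to click.
    assert len(candidates) == 1, "expected exactly one matching object"
    return candidates[0].center

scene = [
    {"label": "cube",   "attributes": {"color": "red"},  "center": (40, 60)},
    {"label": "sphere", "attributes": {"color": "red"},  "center": (120, 80)},
    {"label": "cube",   "attributes": {"color": "blue"}, "center": (200, 30)},
]
question = {"label": "cube", "attrs": {"color": "red"}}  # "click the red cube"
target = infer_target(ground_question(question, parse_layout(scene)))
print(target)  # (40, 60)
```

The point of the modular split is that each stage can be swapped independently, which is presumably how the framework stays robust across different LLM backends.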
Problem

Research questions and friction points this paper is trying to address.

Visual Reasoning CAPTCHAs
compositional inference
visual perception
generalization
bot defense
Innovation

Methods, ideas, or system contributions that make the work stand out.

structured vision-language inference
visual reasoning CAPTCHAs
modular perception-reasoning pipeline
template-space randomization
LLM-based visual grounding