🤖 AI Summary
Existing vision-language models (VLMs) rely heavily on large-scale supervised training while neglecting visual enhancement at inference time, leaving them brittle on basic perception tasks, particularly OCR, as well as on out-of-distribution and adversarial inputs. To address this, the authors propose VACoT (Visual Augmentation Chain-of-Thought), an inference-time framework in which the model dynamically invokes image augmentations (e.g., denoising) as part of its chain-of-thought reasoning. Going beyond prior approaches limited to local cropping, VACoT integrates a structured collection of general visual augmentations and is trained with efficient agentic reinforcement learning under a conditional reward scheme that encourages necessary augmentation while penalizing verbose responses. Extensive experiments show state-of-the-art performance across 13 perception benchmarks, and a newly introduced adversarial OCR benchmark, AdvOCR, validates VACoT's strong generalization under challenging, perturbed conditions.
📝 Abstract
While visual data augmentation remains a cornerstone for training robust vision models, it has received limited attention in vision-language models (VLMs), which predominantly rely on large-scale real data acquisition or synthetic diversity. Consequently, VLMs may struggle with basic perception tasks that conventional models handle reliably. Given the substantial cost of pre-training and fine-tuning VLMs, continued training on augmented data yields limited and diminishing returns. In this paper, we present Visual Augmentation Chain-of-Thought (VACoT), a framework that dynamically invokes image augmentations during model inference. By incorporating post-hoc transformations such as denoising, VACoT substantially improves robustness on challenging and out-of-distribution inputs, especially in OCR-related adversarial scenarios. Distinct from prior approaches limited to local cropping, VACoT integrates a structured collection of general visual augmentations, broadening the views of the query image while reducing training complexity and computational overhead through efficient agentic reinforcement learning. We propose a conditional reward scheme that encourages necessary augmentation while penalizing verbose responses, ensuring concise and effective reasoning in perception tasks. We demonstrate the superiority of VACoT with extensive experiments on 13 perception benchmarks and further introduce AdvOCR to highlight the generalization benefits of post-hoc visual augmentations in adversarial scenarios.
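The conditional reward scheme described above can be illustrated with a minimal sketch. The paper does not specify the exact reward formula, so the function name, weights, and length penalty below are hypothetical, chosen only to show the intended shape: reward correctness, reward augmentation when it is needed, penalize it when it is not, and penalize overly long responses.

```python
def conditional_reward(correct: bool,
                       used_augmentation: bool,
                       augmentation_needed: bool,
                       response_tokens: int,
                       max_tokens: int = 512) -> float:
    """Hypothetical conditional reward for agentic RL training.

    All weights (1.0, 0.5, 0.25, 0.1) are illustrative assumptions,
    not values from the paper.
    """
    # Base reward: task correctness.
    reward = 1.0 if correct else 0.0

    if augmentation_needed:
        # Encourage invoking augmentation when the input requires it.
        reward += 0.5 if used_augmentation else -0.5
    elif used_augmentation:
        # Penalize unnecessary augmentation calls (parsimony).
        reward -= 0.25

    # Penalize verbose responses beyond a token budget.
    if response_tokens > max_tokens:
        reward -= 0.1 * (response_tokens - max_tokens) / max_tokens

    return reward
```

Such a reward would be plugged into a policy-gradient loop: trajectories that call augmentation tools only when useful and answer concisely score highest, which matches the paper's stated goal of concise, effective reasoning.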