🤖 AI Summary
Existing vision-language models struggle to anchor textual queries stably and accurately to their corresponding spatial regions in images, lacking fine-grained alignment capabilities. To address this limitation, this work proposes the Q-Mask framework, which introduces a Causal Query-driven Mask Decoder (CQMD) to enable sequential, query-conditioned mask generation. This approach decouples text localization from recognition and explicitly constructs text anchors before OCR processing. The work adopts a visual chain-of-thought paradigm, establishing, for the first time in OCR-guided vision-language models, a causal, query-driven masking mechanism. The authors also curate TextAnchor-26M, a large-scale dataset with fine-grained mask annotations, and demonstrate significant improvements in text-anchoring accuracy and stability on the newly introduced TextAnchor-Bench benchmark, thereby enhancing the model's capacity for text understanding and localization in complex scenes.
📝 Abstract
Optical Character Recognition (OCR) is increasingly regarded as a foundational capability for modern vision-language models (VLMs), enabling them not only to read text in images but also to support downstream reasoning in real-world visual question answering (VQA). However, practical applications further require reliable text anchors, i.e., accurately grounding queried text to its corresponding spatial region. To systematically evaluate this capability, we introduce TextAnchor-Bench (TABench), a benchmark for fine-grained text-region grounding, which reveals that both general-purpose and OCR-specific VLMs still struggle to establish accurate and stable text anchors. To address this limitation, we propose Q-Mask, a precise OCR framework built upon a causal query-driven mask decoder (CQMD). Inspired by chain-of-thought reasoning, Q-Mask performs causal visual decoding that sequentially generates query-conditioned visual masks before producing the final OCR output. This visual CoT paradigm disentangles where the text is from what the text is, enforcing grounded evidence acquisition prior to recognition and enabling explicit text anchor construction during inference. To train CQMD, we construct TextAnchor-26M, a large-scale dataset of image-text pairs annotated with fine-grained masks corresponding to specific textual elements, encouraging stable text-region correspondences and injecting strong spatial priors into VLM training. Extensive experiments demonstrate that Q-Mask substantially improves text anchoring and understanding across diverse visual scenes.
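The "where before what" disentanglement described above can be illustrated with a toy sketch: given a query, the system first produces a query-conditioned mask (localization), and only then reads text inside the masked region (recognition). Everything here is a hypothetical stand-in for exposition; the grid "image" and the `locate` / `recognize` helpers are not the paper's implementation of CQMD.

```python
# Toy sketch of the locate-then-recognize visual chain-of-thought.
# The grid image and helper functions below are illustrative
# assumptions, not the Q-Mask architecture itself.

# A toy "image": a grid of cells, some containing text tokens.
image = {
    (0, 0): "TOTAL", (0, 1): "$42.00",
    (1, 0): "DATE",  (1, 1): "2024-05-01",
}

def locate(image, query):
    """Step 1 (where): produce a mask over cells matching the query,
    i.e., the 'text anchor'."""
    anchors = {pos for pos, tok in image.items() if query.upper() in tok}
    # Expand each anchor along its row, a crude spatial prior that
    # pulls in the value cell next to a matched key cell.
    rows = {r for r, _ in anchors}
    return {pos for pos in image if pos[0] in rows}

def recognize(image, mask):
    """Step 2 (what): read text only inside the masked region,
    enforcing grounded evidence acquisition before recognition."""
    return [image[pos] for pos in sorted(mask)]

mask = locate(image, "total")  # query-conditioned mask
print(recognize(image, mask))  # -> ['TOTAL', '$42.00']
```

In the actual framework the mask decoder is learned and operates on image features, but the causal ordering is the same: the mask is committed first, and recognition is conditioned on it rather than on the whole image.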