Q-Mask: Query-driven Causal Masks for Text Anchoring in OCR-Oriented Vision-Language Models

📅 2026-03-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language models struggle to accurately and stably anchor textual queries to their corresponding spatial regions in images, lacking fine-grained alignment capability. To address this limitation, this work proposes the Q-Mask framework, which introduces a Causal Query-driven Mask Decoder (CQMD) to enable sequential, query-conditioned mask generation. This design decouples text localization from recognition and constructs explicit text anchors prior to OCR. Drawing on a visual chain-of-thought paradigm, the work establishes a causal, query-driven masking mechanism in OCR-oriented vision-language models for the first time. The authors also curate TextAnchor-26M, a large-scale dataset with fine-grained mask annotations, and demonstrate significant improvements in text-anchoring accuracy and stability on the newly introduced TextAnchor-Bench benchmark, enhancing the model's capacity for text understanding and localization in complex scenes.
📝 Abstract
Optical Character Recognition (OCR) is increasingly regarded as a foundational capability for modern vision-language models (VLMs), enabling them not only to read text in images but also to support downstream reasoning in real-world visual question answering (VQA). However, practical applications further require reliable text anchors, i.e., accurately grounding queried text to its corresponding spatial region. To systematically evaluate this capability, we introduce TextAnchor-Bench (TABench), a benchmark for fine-grained text-region grounding, which reveals that both general-purpose and OCR-specific VLMs still struggle to establish accurate and stable text anchors. To address this limitation, we propose Q-Mask, a precise OCR framework built upon a causal query-driven mask decoder (CQMD). Inspired by chain-of-thought reasoning, Q-Mask performs causal visual decoding that sequentially generates query-conditioned visual masks before producing the final OCR output. This visual CoT paradigm disentangles where the text is from what the text is, enforcing grounded evidence acquisition prior to recognition and enabling explicit text anchor construction during inference. To train CQMD, we construct TextAnchor-26M, a large-scale dataset of image-text pairs annotated with fine-grained masks corresponding to specific textual elements, encouraging stable text-region correspondences and injecting strong spatial priors into VLM training. Extensive experiments demonstrate that Q-Mask substantially improves text anchoring and understanding across diverse visual scenes.
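The abstract's central idea, disentangling *where* the text is from *what* the text is, can be sketched as a two-stage pipeline. The following is an illustrative toy, not the paper's implementation: all names (`TextRegion`, `locate`, `recognize`) are hypothetical, and the real CQMD conditions a learned mask decoder on the query embedding and decodes visual masks autoregressively rather than filtering symbolic regions.

```python
# Toy sketch of the "locate, then recognize" visual chain-of-thought:
# stage 1 builds a query-conditioned text anchor; stage 2 reads only
# the anchored evidence, so the answer is spatially grounded.
from dataclasses import dataclass

@dataclass
class TextRegion:
    text: str    # text content of the region
    box: tuple   # (x0, y0, x1, y1) spatial extent

def locate(regions, query):
    """Stage 1: anchor the query to regions *before* recognition."""
    return [r for r in regions if query.lower() in r.text.lower()]

def recognize(anchored):
    """Stage 2: emit (box, text) pairs only for anchored evidence."""
    return [(r.box, r.text) for r in anchored]

# Toy scene with two text regions.
scene = [
    TextRegion("TOTAL: $12.50", (40, 300, 200, 320)),
    TextRegion("Thank you!", (40, 340, 160, 360)),
]
anchors = locate(scene, "total")
print(recognize(anchors))  # [((40, 300, 200, 320), 'TOTAL: $12.50')]
```

The point of the ordering is the same as in the paper: grounded evidence acquisition precedes recognition, so the final OCR output can always be traced back to an explicit spatial anchor.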
Problem

Research questions and friction points this paper is trying to address.

text anchoring
vision-language models
OCR
spatial grounding
visual question answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Q-Mask
causal query-driven mask decoder
text anchoring
visual chain-of-thought
OCR-oriented VLMs
Longwei Xu
MiLM Plus, Xiaomi Inc
Feng Feng
Full Professor, Tianjin University
AI/ML-based Microwave Design Automation · Surrogate Modeling · EM-based Optimization · Deep Neural
Shaojie Zhang
MiLM Plus, Xiaomi Inc
Xin Chen
MiLM Plus, Xiaomi Inc
Hang Li
MiLM Plus, Xiaomi Inc
Anan Du
MiLM Plus, Xiaomi Inc
Hailong Yu
MiLM Plus, Xiaomi Inc
Pei Fu
MiLM Plus, Xiaomi Inc
Zhenbo Luo
XiaoMi
Vision Language Model · Computer Vision
Jian Luan
Toshiba, Microsoft, Xiaomi
LLM · VLM · TTS · Singing Synthesis