🤖 AI Summary
To address insufficient visual-textual semantic and spatial alignment in multimodal document understanding, this paper proposes VQAMask, a novel pretraining task that jointly optimizes visual-question-answering-style text parsing and spatially aware mask generation. Methodologically, the authors introduce a dual-objective co-optimization paradigm and a lightweight, training-only mask generator that explicitly encodes pixel-level spatial constraints, effectively mitigating visual-textual hallucination in multimodal large language models (MLLMs). Pretrained with the newly constructed large-scale document dataset MTMask6M (6 million samples), the resulting 8B-parameter model achieves significant gains over existing MLLMs of comparable scale, delivering an average 4.2% improvement across document-level visual question answering and information extraction tasks. All code, models, and the MTMask6M dataset are publicly released.
📝 Abstract
Multi-modal Large Language Models (MLLMs) have introduced a novel dimension to document understanding, i.e., they endow large language models with visual comprehension capabilities; however, how to design a suitable image-text pre-training task for bridging the visual and language modalities in document-level MLLMs remains underexplored. In this study, we introduce a novel visual-language alignment method that casts the key issue as a Visual Question Answering with Mask generation (VQAMask) task, optimizing two objectives simultaneously: VQA-based text parsing and mask generation. The former allows the model to implicitly align images and text at the semantic level. The latter introduces an additional mask generator (discarded during inference) to explicitly ensure alignment between visual texts within images and their corresponding image regions at a spatially-aware level. Together, they prevent model hallucinations when parsing visual text and effectively promote spatially-aware feature representation learning. To support the proposed VQAMask task, we construct a comprehensive image-mask generation pipeline and provide a large-scale dataset with 6M samples (MTMask6M). Subsequently, we demonstrate that introducing the proposed mask generation task yields competitive document-level understanding performance. Leveraging the proposed VQAMask, we introduce Marten, a training-efficient MLLM tailored for document-level understanding. Extensive experiments show that Marten consistently achieves significant improvements over 8B-scale MLLMs on document-centric tasks. Code and datasets are available at https://github.com/PriNing/Marten.
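The dual-objective setup described above can be sketched as a weighted sum of a token-level cross-entropy loss (VQA-based text parsing) and a pixel-level binary cross-entropy loss (mask generation). This is a minimal NumPy illustration, not the paper's implementation: the function names, the loss forms, and the weighting factor `lam` are assumptions; Marten's actual losses and mask-generator architecture may differ.

```python
import numpy as np

def vqa_parsing_loss(logits, targets):
    """Token-level cross-entropy for VQA-based text parsing (semantic alignment).

    logits: (T, V) unnormalized scores over a vocabulary of size V
    targets: (T,) ground-truth token ids
    """
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(targets)), targets] + 1e-12))

def mask_generation_loss(pred_logits, gt_mask):
    """Pixel-level binary cross-entropy for mask generation (spatial alignment).

    pred_logits: (H, W) logits from the (training-only) mask generator
    gt_mask: (H, W) binary mask marking text regions in the image
    """
    p = 1.0 / (1.0 + np.exp(-pred_logits))
    return -np.mean(gt_mask * np.log(p + 1e-12)
                    + (1.0 - gt_mask) * np.log(1.0 - p + 1e-12))

def vqamask_loss(logits, targets, pred_logits, gt_mask, lam=1.0):
    """Joint objective; the mask branch is dropped at inference time."""
    return vqa_parsing_loss(logits, targets) + lam * mask_generation_loss(pred_logits, gt_mask)
```

At inference only the text-parsing branch remains, so the mask generator adds no runtime cost; it serves purely as a training signal that forces visual features to stay spatially grounded in the text regions of the image.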