Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address insufficient visual-textual semantic and spatial alignment in multimodal document understanding, this paper proposes VQAMask—a novel pretraining task that jointly models visual-question-answering-style text parsing and spatially aware mask generation. Methodologically, the authors introduce a dual-objective co-optimization paradigm and a lightweight, training-only mask generator (discarded at inference) that explicitly encodes pixel-level spatial constraints, effectively mitigating visual-textual hallucination in multimodal large language models (MLLMs). Trained on the newly constructed large-scale document dataset MTMask6M (6 million samples), the resulting 8B-parameter model, Marten, achieves significant gains over existing MLLMs of comparable scale, delivering an average 4.2% improvement across document-level visual question answering and information extraction tasks. All code, models, and the MTMask6M dataset are publicly released.

📝 Abstract
Multi-modal Large Language Models (MLLMs) have introduced a novel dimension to document understanding, i.e., they endow large language models with visual comprehension capabilities; however, how to design a suitable image-text pre-training task for bridging the visual and language modalities in document-level MLLMs remains underexplored. In this study, we introduce a novel visual-language alignment method that casts the key issue as a Visual Question Answering with Mask generation (VQAMask) task, optimizing two tasks simultaneously: VQA-based text parsing and mask generation. The former allows the model to implicitly align images and text at the semantic level. The latter introduces an additional mask generator (discarded during inference) to explicitly ensure alignment between visual texts within images and their corresponding image regions at a spatially-aware level. Together, they can prevent model hallucinations when parsing visual text and effectively promote spatially-aware feature representation learning. To support the proposed VQAMask task, we construct a comprehensive image-mask generation pipeline and provide a large-scale dataset with 6M samples (MTMask6M). Subsequently, we demonstrate that introducing the proposed mask generation task yields competitive document-level understanding performance. Leveraging the proposed VQAMask, we introduce Marten, a training-efficient MLLM tailored for document-level understanding. Extensive experiments show that our Marten consistently achieves significant improvements among 8B-MLLMs in document-centric tasks. Code and datasets are available at https://github.com/PriNing/Marten.
Problem

Research questions and friction points this paper is trying to address.

Bridging visual and language modalities in document-level MLLMs.
Preventing model hallucinations in visual text parsing.
Enhancing spatially-aware feature representation learning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual Question Answering with Mask generation (VQAMask) as a visual-language alignment pre-training task
Simultaneous optimization of VQA-based text parsing and mask generation, with the mask generator discarded at inference
Marten, a training-efficient MLLM tailored for document-level understanding
A large-scale image-mask dataset, MTMask6M (6M samples)
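The dual-objective idea above — a semantic VQA text-parsing loss plus an auxiliary spatial mask-generation loss whose head is dropped at inference — can be sketched as a training objective. This is a minimal illustrative sketch, not the paper's implementation: the module names (`mask_head`, `lm_head`), the loss choices (cross-entropy for text, binary cross-entropy for masks), and the weighting term `lambda_mask` are all assumptions.

```python
import torch
import torch.nn as nn


class VQAMaskObjective(nn.Module):
    """Illustrative sketch of a VQAMask-style dual objective:
    VQA-based text parsing (semantic alignment) plus mask generation
    (spatial alignment). Names and loss choices are assumptions."""

    def __init__(self, hidden_dim=32, vocab_size=100, lambda_mask=1.0):
        super().__init__()
        # Lightweight mask generator used only during training;
        # at inference it is discarded and only lm_head remains.
        self.mask_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))
        self.lm_head = nn.Linear(hidden_dim, vocab_size)
        self.lambda_mask = lambda_mask  # assumed weighting between the two losses
        self.ce = nn.CrossEntropyLoss()
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, text_feats, text_targets, visual_feats, mask_targets):
        # Semantic alignment: standard answer-parsing / language-modeling loss.
        vqa_loss = self.ce(self.lm_head(text_feats), text_targets)
        # Spatial alignment: predict a binary text-region mask per visual token.
        mask_logits = self.mask_head(visual_feats).squeeze(-1)
        mask_loss = self.bce(mask_logits, mask_targets)
        return vqa_loss + self.lambda_mask * mask_loss


# Toy usage with random features standing in for MLLM outputs.
obj = VQAMaskObjective()
loss = obj(torch.randn(8, 32), torch.randint(0, 100, (8,)),
           torch.randn(4, 32), torch.randint(0, 2, (4,)).float())
```

Keeping the mask head separate from the language-modeling head is what lets it be removed after training, so the spatial supervision adds no inference-time cost.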
👥 Authors
Zining Wang (Beihang University)
Tongkun Guan (Beijing Institute of Technology)
Pei Fu (Meituan; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University)
Chen Duan (Meituan; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University)
Qianyi Jiang (Meituan; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University)
Zhentao Guo (Beijing Institute of Technology)
Shan Guo (Meituan; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University)
Junfeng Luo (Meituan; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University)
Wei Shen (MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University)
Xiaokang Yang (Beijing Institute of Technology)