PDF-WuKong: A Large Multimodal Model for Efficient Long PDF Reading with End-to-End Sparse Sampling

📅 2024-10-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the multimodal understanding challenge posed by long academic PDFs, which interleave text with figures and span multiple languages, this paper proposes an end-to-end sparse sampling framework. The framework dynamically selects question-relevant text passages and figures as input to a multimodal large language model (MLLM), employs a dual-encoder collaborative architecture, and introduces an evidence-source-aware strategy for automatic QA generation. The authors further construct PaperPDF, the first large-scale multimodal QA dataset derived from academic PDFs, comprising 1.1 million samples. The method surpasses state-of-the-art closed-source models by an average of 8.6% F1 on long-document multimodal QA benchmarks while accelerating inference by 3.2×. The code and the PaperPDF dataset are publicly released.

📝 Abstract
Multimodal document understanding is a challenging task that requires processing and comprehending large amounts of textual and visual information. Recent advances in Large Language Models (LLMs) have significantly improved performance on this task. However, existing methods typically focus on either plain text or a limited number of document images, and struggle with long PDF documents that interleave text and images, especially academic papers. In this paper, we introduce PDF-WuKong, a multimodal large language model (MLLM) designed to enhance multimodal question answering (QA) for long PDF documents. PDF-WuKong incorporates a sparse sampler that operates on both text and image representations, significantly improving the efficiency and capability of the MLLM. The sparse sampler is integrated with the MLLM's image encoder and selects the paragraphs or diagrams most pertinent to the user's query for processing by the language model. To effectively train and evaluate our model, we construct PaperPDF, a dataset consisting of a broad collection of English and Chinese academic papers. Multiple strategies are proposed to automatically generate 1.1 million QA pairs along with their corresponding evidence sources. Experimental results demonstrate the superiority and high efficiency of our approach over other models on long multimodal document understanding, surpassing proprietary products by an average of 8.6% on F1. Our code and dataset will be released at https://github.com/yh-hust/PDF-Wukong.
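The core idea of the sparse sampler, selecting only the paragraphs or figures most relevant to the query before they reach the language model, can be approximated as embedding-similarity retrieval. The sketch below is a hypothetical illustration, not the paper's trained end-to-end sampler: `sparse_sample`, the toy embeddings, and the `top_k` parameter are all assumptions for demonstration, and the actual model scores chunks with a dual encoder trained jointly with the MLLM's image encoder.

```python
import numpy as np

def sparse_sample(query_emb, chunk_embs, top_k=3):
    """Pick the top_k document chunks (text paragraphs or figure
    embeddings) most similar to the query by cosine similarity,
    returned in document order for prompting the language model."""
    q = query_emb / np.linalg.norm(query_emb)
    c = chunk_embs / np.linalg.norm(chunk_embs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per chunk
    top = np.argsort(scores)[::-1][:top_k]
    return sorted(top.tolist())         # restore reading order

# Toy example: 5 chunks in a 4-dimensional embedding space.
rng = np.random.default_rng(0)
chunks = rng.normal(size=(5, 4))
query = chunks[2] + 0.01 * rng.normal(size=4)  # query nearly matches chunk 2
selected = sparse_sample(query, chunks, top_k=2)
assert 2 in selected  # the near-duplicate chunk is always retrieved
```

Feeding only the selected chunks, rather than the full document, to the MLLM is what yields the efficiency gains the paper reports: the language model's input length stays roughly constant regardless of PDF length.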
Problem

Research questions and friction points this paper is trying to address.

Long PDF Processing
Academic Papers
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

PDF-WuKong
Long Document Processing
Efficient Information Extraction
Xudong Xie
Huazhong University of Science and Technology
Liang Yin
Huazhong University of Science and Technology
Hao Yan
Huazhong University of Science and Technology
Yang Liu
Huazhong University of Science and Technology
Jing Ding
Huazhong University of Science and Technology
Minghui Liao
Huawei Inc.
Yuliang Liu
Huazhong University of Science and Technology
Wei Chen
Huazhong University of Science and Technology
Xiang Bai
Huazhong University of Science and Technology
Computer Vision
OCR