BigDocs: An Open and Permissively-Licensed Dataset for Training Multimodal Models on Document and Code Tasks

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the challenges limiting commercial deployment of multimodal AI in document understanding and code generation—scarce training data and restrictive licensing—this paper introduces BigDocs: the first high-quality, traceable, and license-compliant open multimodal dataset for documents and code (7.5 million samples across 30 task categories), together with its associated benchmark, BigDocs-Bench (10 real-world tasks, e.g., Screenshot2HTML and Image2LaTeX). The benchmark covers novel evaluation settings, including GUI-aware reasoning and image-driven code generation. The data-curation pipeline integrates automated content analysis, license-compliance filtering, structured metadata tracing, and human verification. Models trained on BigDocs improve average performance by up to 25.8% over GPT-4o across multiple tasks, with human evaluations strongly favoring their outputs.

📝 Abstract
Multimodal AI has the potential to significantly enhance document-understanding tasks such as processing receipts, understanding workflows, extracting data from documents, and summarizing reports. Code-generation tasks that require long, structured outputs can also benefit from multimodality. Despite this, commercial use of such models is often constrained by limited access to training data and restrictive licensing, which hinders open access. To address these limitations, we introduce BigDocs-7.5M, a high-quality, open-access dataset comprising 7.5 million multimodal documents across 30 tasks. We use an efficient data-curation process to ensure our data is high-quality and license-permissive. Our process emphasizes accountability, responsibility, and transparency through filtering rules, traceable metadata, and careful content analysis. Additionally, we introduce BigDocs-Bench, a benchmark suite with 10 novel tasks whose datasets reflect real-world use cases involving reasoning over graphical user interfaces (GUIs) and code generation from images. Our experiments show that training with BigDocs-Bench improves average performance by up to 25.8% over the closed-source GPT-4o on document-reasoning and structured-output tasks such as Screenshot2HTML and Image2LaTeX generation. Finally, human evaluations showed a preference for outputs from models trained on BigDocs over GPT-4o. This suggests that BigDocs can help both academics and the open-source community utilize and improve AI tools for multimodal capabilities and document reasoning. The project is hosted at https://bigdocs.github.io.
Problem

Research questions and friction points this paper addresses.

Limited access to high-quality multimodal training data hinders progress in document AI.
Restrictive licensing of existing datasets blocks commercial use of document-understanding and code-generation models.
Existing benchmarks rarely cover real-world tasks such as GUI reasoning and code generation from images.
Innovation

Methods, ideas, or system contributions that make the work stand out.

BigDocs-7.5M: an open-access multimodal dataset of 7.5 million documents across 30 tasks
BigDocs-Bench: a benchmark of 10 novel tasks reflecting real-world GUI-reasoning and image-to-code use cases
An efficient, license-aware curation pipeline with filtering rules, traceable metadata, and careful content analysis
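To make the license-aware curation idea concrete, here is a minimal sketch of a license-compliance filter that records per-sample provenance. The allow-list, field names, and `Sample` structure are hypothetical illustrations, not the actual BigDocs pipeline:

```python
from dataclasses import dataclass, field

# Hypothetical allow-list; BigDocs' real license criteria may differ.
PERMISSIVE_LICENSES = {"cc-by-4.0", "cc0-1.0", "mit", "apache-2.0"}

@dataclass
class Sample:
    image_id: str
    source_url: str               # provenance kept for traceability
    license: str
    task: str                     # e.g. "Screenshot2HTML", "Image2LaTeX"
    provenance: dict = field(default_factory=dict)

def license_filter(samples):
    """Keep only permissively licensed samples, recording why each
    decision was made so every retained sample stays traceable."""
    kept = []
    for s in samples:
        ok = s.license.lower() in PERMISSIVE_LICENSES
        s.provenance["license_check"] = "pass" if ok else f"rejected:{s.license}"
        if ok:
            kept.append(s)
    return kept

samples = [
    Sample("img-001", "https://example.org/a", "CC-BY-4.0", "Screenshot2HTML"),
    Sample("img-002", "https://example.org/b", "proprietary", "Image2LaTeX"),
]
clean = license_filter(samples)
print([s.image_id for s in clean])  # only the permissively licensed sample remains
```

Rejected samples keep their provenance record rather than vanishing silently, which mirrors the accountability and transparency goals the paper attributes to its curation process.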
👥 Authors

Juan Rodriguez, ServiceNow
Xiangru Jian, University of Waterloo (Multimodality, LLM, GNN, Database)
Siba Smarak Panigrahi, EPFL | McGill University and Mila | IIT Kharagpur (Vision Language Models, Generative Models, Geometric Deep Learning, AI for Science)
Tianyu Zhang, ServiceNow, Mila, Université de Montréal
Aarash Feizi, PhD student in Computer Science, McGill University (Representation Learning, Self-Supervised Learning, Graph Representation Learning)
Abhay Puri, Applied Research Scientist, ServiceNow Research (Agent Security, Large Language Models, Computer Vision, Multimodal Foundational Models)
Akshay Kalkunte, ServiceNow
François Savard, ServiceNow
Ahmed Masry, Graduate Student, York University (Natural Language Processing)
Shravan Nayak, Mila (Vision and Language, Culture, Geo-diversity, Multilinguality)
Rabiul Awal, Mila, Montreal (Deep Learning, AGI)
Mahsa Massoud, ServiceNow, McGill University
Amirhossein Abaskohi, Computer Science PhD student, UBC (Natural Language Processing, Computational Linguistics, Multimodal Reasoning, Human-Centered AI)
Zichao Li, ServiceNow, Mila, McGill University
Suyuchen Wang, Université de Montréal / Mila (NLP, LLM, VLM, Deep Learning)
Pierre-André Noël, ServiceNow Research (Machine Learning, Graphs, Stochastic Processes)
Mats Leon Richter, ServiceNow
Saverio Vadacchino, ServiceNow
Shubham Agarwal, ServiceNow
Sanket Biswas, PhD candidate, Computer Vision Center, Universitat Autònoma de Barcelona (Computer Vision, Document Understanding, Vision and Language, Machine Learning, Pattern Recognition)
Sara Shanian, ServiceNow
Ying Zhang, ServiceNow
Noah Bolger, ServiceNow
Kurt MacDonald, ServiceNow
Simon Fauvel, ServiceNow
Sathwik Tejaswi, ServiceNow
Srinivas Sunkara, Google DeepMind
João Monteiro, ServiceNow
K. Dvijotham, ServiceNow
Torsten Scholak, ServiceNow
Nicolas Chapados, ServiceNow Research, Mila, Polytechnique Montréal (adjunct) (Deep Learning, Artificial Intelligence, Statistics, Forecasting)
Sepideh Kharaghani, ServiceNow
Sean Hughes, ServiceNow
M. Özsu, University of Waterloo
Siva Reddy, McGill University, Mila Quebec AI Institute (Natural Language Processing, Computational Linguistics, Deep Learning, Semantics)
Marco Pedersoli, ServiceNow, École de Technologie Supérieure
Yoshua Bengio, Professor of Computer Science, University of Montreal, Mila, IVADO, CIFAR (Machine Learning, Deep Learning, Artificial Intelligence)
Christopher Pal, ServiceNow, Mila, Polytechnique Montréal
Issam Laradji, ServiceNow, University of British Columbia
Spandana Gella, ServiceNow AI Research (Multimodal Foundational Models, GUI Agents, Safety & Security)
Perouz Taslakian, ServiceNow
David Vazquez, ServiceNow
Sai Rajeswar, Staff Research Scientist, Adjunct Professor, Mila, ServiceNow (Machine Learning, Generative Models, Reinforcement Learning)