🤖 AI Summary
Commercial deployment of multimodal AI for document understanding and code generation is hindered by limited training data and restrictive licensing. To address this, the paper introduces BigDocs-7.5M, the first high-quality, traceable, and license-compliant open multimodal dataset for documents and code (7.5 million samples across 30 task categories), along with an associated benchmark, BigDocs-Bench (10 real-world tasks, e.g., Screenshot2HTML and Image2LaTeX). We propose novel evaluation paradigms, including GUI-aware reasoning and image-driven code generation. Our data curation pipeline integrates automated content analysis, license-compliance filtering, structured metadata tracing, and human verification. Models trained on BigDocs achieve performance gains of up to 25.8% over GPT-4o across multiple tasks, and human evaluations strongly favor their outputs.
📝 Abstract
Multimodal AI has the potential to significantly enhance document-understanding tasks, such as processing receipts, understanding workflows, extracting data from documents, and summarizing reports. Code generation tasks that require long, structured outputs can also benefit from multimodality. Despite this, the use of multimodal AI in commercial applications is often constrained by limited access to training data and by restrictive licensing that hinders open access. To address these limitations, we introduce BigDocs-7.5M, a high-quality, open-access dataset comprising 7.5 million multimodal documents across 30 tasks. We use an efficient data curation process to ensure our data is high-quality and license-permissive. Our process emphasizes accountability, responsibility, and transparency through filtering rules, traceable metadata, and careful content analysis. Additionally, we introduce BigDocs-Bench, a benchmark suite of 10 novel tasks with datasets that reflect real-world use cases involving reasoning over graphical user interfaces (GUIs) and code generation from images. Our experiments show that training with BigDocs-Bench improves average performance by up to 25.8% over the closed-source GPT-4o in document reasoning and structured-output tasks such as Screenshot2HTML and Image2LaTeX generation. Finally, human evaluations preferred outputs from models trained on BigDocs over those from GPT-4o. This suggests that BigDocs can help both academics and the open-source community use and improve AI tools for multimodal understanding and document reasoning. The project is hosted at https://bigdocs.github.io.
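The abstract describes a curation process built on license-compliance filtering and traceable metadata. As a minimal illustrative sketch (not the actual BigDocs pipeline), the core idea of such a filter can look like the following; the field names (`license`, `source_url`), the allowlist, and the `provenance` record are all assumptions made for illustration:

```python
# Hypothetical sketch of a license-compliance filter with traceable
# metadata. Field names and the license allowlist are illustrative
# assumptions, not taken from the BigDocs pipeline.

PERMISSIVE_LICENSES = {"cc-by-4.0", "cc0-1.0", "mit", "apache-2.0"}

def filter_license_compliant(samples):
    """Keep only samples with a permissive license, recording provenance."""
    kept = []
    for sample in samples:
        license_id = str(sample.get("license", "")).lower()
        if license_id in PERMISSIVE_LICENSES:
            # Attach traceable metadata so each kept sample's origin
            # and license remain auditable downstream.
            sample["provenance"] = {
                "license": license_id,
                "source_url": sample.get("source_url", "unknown"),
            }
            kept.append(sample)
    return kept

samples = [
    {"id": 1, "license": "CC-BY-4.0", "source_url": "https://example.org/a"},
    {"id": 2, "license": "proprietary"},
]
print([s["id"] for s in filter_license_compliant(samples)])  # -> [1]
```

A real pipeline at this scale would also need automated content analysis and human verification passes, as the abstract notes; this sketch only captures the filtering-plus-tracing step.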