OIDA-QA: A Multimodal Benchmark for Analyzing the Opioid Industry Documents Archive

πŸ“… 2025-11-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the challenges of industrial document analysis and weak multimodal understanding surrounding the opioid crisis, this paper introduces OIDA-QA, the first multimodal benchmark tailored to the UCSF-JHU Opioid Industry Documents Archive (OIDA), integrating text, images, and layout information for fine-grained medical-legal document understanding and question answering. The authors propose a historical-QA-guided context anchoring mechanism and an importance-aware page classifier, improving answer relevance and localization accuracy. High-quality training and test data are generated automatically with multimodal large language models, yielding an open-source dataset of roughly 400k documents and 370k QA pairs. Experiments demonstrate substantial gains over unimodal baselines on document information extraction and complex multimodal QA tasks, establishing a reproducible, scalable paradigm for systematic evidence mining in public health crises.

πŸ“ Abstract
The opioid crisis represents a significant moment in public health that reveals systemic shortcomings across regulatory systems, healthcare practices, corporate governance, and public policy. Analyzing how these interconnected systems simultaneously failed to protect public health requires innovative analytic approaches for exploring the vast amounts of data and documents disclosed in the UCSF-JHU Opioid Industry Documents Archive (OIDA). The complexity, multimodal nature, and specialized characteristics of these healthcare-related legal and corporate documents necessitate more advanced methods and models tailored to specific data types and detailed annotations, ensuring precision and professionalism in the analysis. In this paper, we tackle this challenge by organizing the original dataset according to document attributes and constructing a benchmark with 400k training documents and 10k for testing. From each document, we extract rich multimodal information, including textual content, visual elements, and layout structures, to capture a comprehensive range of features. Using multiple AI models, we then generate a large-scale dataset comprising 360k training QA pairs and 10k testing QA pairs. Building on this foundation, we develop domain-specific multimodal Large Language Models (LLMs) and explore the impact of multimodal inputs on task performance. To further enhance response accuracy, we incorporate historical QA pairs as contextual grounding for answering current queries. Additionally, we incorporate page references within the answers and introduce an importance-based page classifier, further improving the precision and relevance of the information provided. Preliminary results indicate improvements from our AI assistant in document information extraction and question-answering tasks. The dataset is available at: https://huggingface.co/datasets/opioidarchive/oida-qa
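The importance-based page classifier described above can be sketched as a simple rank-and-select step: score each page of a document, keep the top-scoring pages, and use those indices as the page references cited in answers. The function name, scorer interface, and top-k cutoff below are assumptions for illustration, not the paper's actual implementation.

```python
def select_pages(pages, scorer, top_k=3):
    """Rank document pages by an importance score and return the
    0-based indices of the top-k pages (most important first).

    `pages` is any list of page representations; `scorer` maps one
    page to a numeric importance score. Both are placeholders for
    whatever learned classifier the real system would use.
    """
    ranked = sorted(enumerate(pages), key=lambda p: scorer(p[1]), reverse=True)
    return [idx for idx, _ in ranked[:top_k]]
```

Because `sorted` is stable, pages with equal scores keep their original document order, so the cited page list is deterministic.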
Problem

Research questions and friction points this paper is trying to address.

Analyzing systemic failures in the opioid crisis through vast document archives
Developing multimodal AI models for healthcare legal document analysis
Improving question-answering precision on complex corporate and regulatory documents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructed multimodal benchmark with document attributes
Developed domain-specific multimodal LLMs for analysis
Incorporated contextual grounding and page references
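The contextual-grounding idea listed above, feeding historical QA pairs alongside the current query, can be sketched as a prompt-assembly step. The function name, prompt format, and history window are assumptions for illustration, not the paper's actual mechanism.

```python
def build_prompt(question, history, max_history=3):
    """Assemble a prompt that grounds the current question in the most
    recent QA turns. `history` is a list of (question, answer) tuples;
    only the last `max_history` turns are kept as context.
    """
    turns = [f"Q: {q}\nA: {a}" for q, a in history[-max_history:]]
    turns.append(f"Q: {question}\nA:")  # model completes the final answer
    return "\n\n".join(turns)
```

Truncating to the most recent turns keeps the prompt within a model's context window while still anchoring the answer in prior exchanges about the same document.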
πŸ”Ž Similar Papers
No similar papers found.
Authors
Xuan Shen (Cornell Tech, Northeastern University): Efficient Deep Learning, ML Systems, AutoML
Brian Wingenroth (Johns Hopkins University)
Zichao Wang (Adobe Research): Document AI, AI for Education, Natural Language Processing, Machine Learning
Jason Kuen (Adobe Research): Deep Learning, Computer Vision
Wanrong Zhu (Adobe Research): Vision and Language, Natural Language Processing
Ruiyi Zhang (Adobe Research)
Yiwei Wang (University of California, Merced)
Lichun Ma (National Institutes of Health)
Anqi Liu (Tulane University): Human Genetics, Computational Biology, Bioinformatics, Deep Learning
Hongfu Liu (Brandeis University)
Tong Sun (Adobe Research)
Kevin S. Hawkins (Johns Hopkins University)
Kate Tasker (University of California, San Francisco)
G. C. Alexander (Johns Hopkins University)
Jiuxiang Gu (Adobe Research): Computer Vision, Natural Language Processing, Machine Learning