🤖 AI Summary
This paper addresses the challenges of extracting and retrieving knowledge from multimodal heterogeneous documents—including text, tables, images, emails, and audio/video—by proposing a scalable, open-source RAG and information extraction framework. Methodologically, it introduces a modular, distributed architecture that enables CPU/GPU co-execution and parallel processing, transforms multimodal inputs into a unified representation, and combines hybrid dense-sparse retrieval with GPU-accelerated computation. Key contributions include: (1) native support for 15+ document formats; (2) both interactive APIs and batch-mode RAG services; (3) 40% higher parsing accuracy than Docling on scanned PDFs and a 3.8× speedup over single-node baselines in benchmark evaluations; and (4) consistent gains in medical question answering accuracy on PubMedQA as retrieval depth increases—demonstrating its effectiveness and practicality for open-domain multimodal RAG.
📝 Abstract
We introduce MMORE, an open-source pipeline for Massive Multimodal Open Retrieval-Augmented Generation and Extraction, designed to ingest, transform, and retrieve knowledge from heterogeneous document formats at scale. MMORE supports more than fifteen file types, including text, tables, images, emails, audio, and video, and processes them into a unified format to enable downstream applications for LLMs. The architecture offers modular, distributed processing, enabling scalable parallelization across CPUs and GPUs. On processing benchmarks, MMORE demonstrates a 3.8-fold speedup over single-node baselines and 40% higher accuracy than Docling on scanned PDFs. The pipeline integrates hybrid dense-sparse retrieval and supports both interactive APIs and batch RAG endpoints. Evaluated on PubMedQA, MMORE-augmented medical LLMs improve biomedical QA accuracy with increasing retrieval depth. MMORE provides a robust, extensible foundation for deploying task-agnostic RAG systems on diverse, real-world multimodal data. The codebase is available at https://github.com/swiss-ai/mmore.
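The abstract mentions hybrid dense-sparse retrieval but does not spell out how the two signals are fused. A common approach, sketched below, scores each document with both a lexical term-overlap signal (a stand-in for BM25) and a cosine similarity over embedding vectors, then combines them with a weighted sum. The `alpha` weight, the toy embeddings, and the simple term-frequency sparse score are illustrative assumptions, not MMORE's actual implementation.

```python
import math
from collections import Counter

def sparse_score(query: str, doc: str) -> float:
    # Lexical overlap: term-frequency dot product (simplified BM25 stand-in).
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return float(sum(q[t] * d[t] for t in q))

def dense_score(q_vec: list[float], d_vec: list[float]) -> float:
    # Cosine similarity between (hypothetical) embedding vectors.
    dot = sum(a * b for a, b in zip(q_vec, d_vec))
    nq = math.sqrt(sum(a * a for a in q_vec))
    nd = math.sqrt(sum(b * b for b in d_vec))
    return dot / (nq * nd) if nq and nd else 0.0

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    # Weighted fusion of the dense and sparse signals; alpha is illustrative.
    return alpha * dense_score(q_vec, d_vec) + (1 - alpha) * sparse_score(query, doc)
```

In practice the two scores are usually normalized per query before fusion, since lexical and cosine scores live on different scales; the sketch omits that step for brevity.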