🤖 AI Summary
The explosive growth of scientific literature impedes efficient knowledge discovery, exacerbates redundant research, and hinders cross-disciplinary collaboration. To address these challenges, we propose a high-performance Retrieval-Augmented Generation (RAG) system scalable to millions of documents. We introduce the first High-Performance Computing (HPC)-driven large-scale RAG paradigm, integrating the Polaris, Sunspot, and Frontier supercomputing resources with distributed vector retrieval. We further propose Oreo, a multimodal document parsing model that significantly improves structural accuracy for complex scientific documents containing mathematical formulas and figures. Additionally, we design ColTrast, a query-aware contrastive encoding algorithm that enables late-interaction semantic alignment and improves retrieval precision. Our system achieves 90% accuracy on SciQ and 76% on PubMedQA, substantially outperforming PubMedGPT and GPT-4. It scales to thousands of GPUs and delivers millisecond-latency RAG inference over million-document corpora.
📝 Abstract
The volume of scientific literature is growing exponentially, leading to underutilized discoveries, duplicated efforts, and limited cross-disciplinary collaboration. Retrieval-Augmented Generation (RAG) offers a way to assist scientists by improving the factuality of Large Language Models (LLMs) in processing this influx of information. However, scaling RAG to handle millions of articles introduces significant challenges, including the high computational costs associated with parsing documents and embedding scientific knowledge, as well as the algorithmic complexity of aligning these representations with the nuanced semantics of scientific content. To address these issues, we introduce HiPerRAG, a RAG workflow powered by high-performance computing (HPC) to index and retrieve knowledge from more than 3.6 million scientific articles. At its core are Oreo, a high-throughput model for multimodal document parsing, and ColTrast, a query-aware encoder fine-tuning algorithm that enhances retrieval accuracy by using contrastive learning and late-interaction techniques. HiPerRAG delivers robust performance on existing scientific question answering benchmarks and two new benchmarks introduced in this work, achieving 90% accuracy on SciQ and 76% on PubMedQA, outperforming both domain-specific models like PubMedGPT and commercial LLMs such as GPT-4. Scaling to thousands of GPUs on the Polaris, Sunspot, and Frontier supercomputers, HiPerRAG delivers million-document-scale RAG workflows for unifying scientific knowledge and fostering interdisciplinary innovation.
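The "late-interaction" technique mentioned above typically refers to ColBERT-style MaxSim scoring, in which each query token embedding is matched against its best-scoring document token embedding and the per-token maxima are summed. The sketch below is illustrative only, with toy 3-dimensional vectors standing in for real model outputs; it is not the paper's ColTrast implementation.

```python
# Minimal sketch of late-interaction ("MaxSim") relevance scoring.
# Each query token is compared to every document token; the best match
# per query token is kept, and the maxima are summed into one score.

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(query_embs, doc_embs):
    """Sum over query tokens of the max similarity to any doc token."""
    return sum(max(dot(q, d) for d in doc_embs) for q in query_embs)

# Toy example: two query-token embeddings, three document-token embeddings.
query = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
doc = [[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.1, 0.1, 0.9]]
print(maxsim_score(query, doc))  # 0.9 + 0.8 = 1.7
```

Because scoring happens per token rather than on a single pooled vector, fine-grained query terms (e.g. a formula symbol or gene name) can each align with their most relevant passage token, which is what makes late interaction attractive for nuanced scientific text.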