Zero-Indexing Internet Search Augmented Generation for Large Language Models

📅 2024-11-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) lack real-time web awareness, limiting their ability to satisfy time-sensitive information needs. To address this, we propose a zero-index, end-to-end collaborative search-augmented generation framework. First, a parsing LLM dynamically identifies retrieval intent and extracts keywords; then, a general-purpose search API retrieves up-to-date web pages. A hybrid re-ranking module—integrating semantic relevance and domain authority—mitigates search engine bias. Finally, an extraction LLM structurally parses raw HTML content into task-ready representations. The framework requires no local indexing or model fine-tuning, enabling rapid deployment and low-latency adaptation to evolving information. Empirical evaluation demonstrates substantial improvements in both temporal freshness and factual accuracy of generated outputs. Deployed at scale in 01.AI’s production environment, it robustly supports high-concurrency generative inference workloads with sustained reliability.

📝 Abstract
Retrieval-augmented generation has emerged as an effective method for enhancing large language model performance. This approach typically relies on an internal retrieval module that uses various indexing mechanisms to manage a static, pre-processed corpus. However, such a paradigm often falls short when up-to-date information that has not yet entered the corpus must be integrated at generative inference time. In this paper, we explore an alternative approach that leverages standard search engine APIs to dynamically integrate the latest online information, without maintaining any index over a fixed corpus, thereby improving the quality of generated content. We design a collaborative LLM-based paradigm comprising: (i) a parser-LLM that determines, in a single inference, whether Internet-augmented generation is needed and, if so, extracts the search keywords; (ii) a mixed ranking strategy that re-ranks the retrieved HTML files to eliminate bias introduced by the search engine API; and (iii) an extractor-LLM that accurately and efficiently extracts relevant information from the fresh content in each HTML file. We conduct extensive empirical studies to evaluate this Internet search augmented generation paradigm. The experimental results demonstrate that our method generates content of significantly improved quality. Our system has been successfully deployed in a production environment to serve 01.AI's generative inference requests.
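The abstract's three-component pipeline (parser-LLM, search API, extractor-LLM) can be sketched as below. This is a minimal illustration, not the authors' implementation: the keyword heuristic standing in for the parser-LLM, the stubbed search and extraction functions, and all names are assumptions.

```python
# Hedged sketch of a zero-index, search-augmented generation pipeline.
# The real system would replace the stubs with actual LLM inference
# calls and a real search engine API.
from dataclasses import dataclass


@dataclass
class ParseResult:
    needs_search: bool      # is Internet augmentation demanded?
    keywords: list          # extracted search keywords, if so


def parser_llm(query: str) -> ParseResult:
    # Stand-in for the parser-LLM: one "inference" that both detects
    # time-sensitive intent and extracts keywords.
    time_sensitive = any(
        w in query.lower() for w in ("latest", "today", "current", "recent")
    )
    keywords = [w for w in query.split() if len(w) > 3]
    return ParseResult(needs_search=time_sensitive, keywords=keywords)


def search_api(keywords: list) -> list:
    # Stand-in for a general-purpose search engine API returning raw HTML.
    return [
        {"url": f"https://example.com/{k}", "html": f"<html>{k} update</html>"}
        for k in keywords
    ]


def extractor_llm(html: str) -> str:
    # Stand-in for the extractor-LLM that turns raw HTML into
    # task-ready text for the generator.
    return html.replace("<html>", "").replace("</html>", "")


def internet_augmented_generate(query: str) -> str:
    parsed = parser_llm(query)
    if not parsed.needs_search:
        return "answer from parametric knowledge"
    pages = search_api(parsed.keywords)
    evidence = [extractor_llm(p["html"]) for p in pages]
    return "answer grounded in: " + "; ".join(evidence)
```

Because no local index is maintained, the only per-query costs are the two LLM inferences and one search API call, which is what enables the rapid deployment the summary describes.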
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Real-time Information
Generation Limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-Augmented Generation
Real-time Information Update
Bias Mitigation Algorithm
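The bias-mitigation step (the "mixed ranking strategy") can be illustrated as a weighted blend of semantic relevance and domain authority. The convex-combination form, the weight `alpha`, and the example scores are assumptions for illustration; the paper's exact scoring function is not given on this page.

```python
def hybrid_rerank(pages, alpha=0.7):
    """Re-rank retrieved pages by blending a semantic-relevance score
    with a domain-authority prior (both assumed pre-computed in [0, 1]).
    Higher alpha trusts relevance; lower alpha trusts authority."""
    def score(p):
        return alpha * p["semantic"] + (1 - alpha) * p["authority"]
    return sorted(pages, key=score, reverse=True)


# Illustrative inputs: a highly relevant low-authority blog post vs. a
# moderately relevant high-authority news site.
pages = [
    {"url": "https://blog.example/a", "semantic": 0.9, "authority": 0.1},
    {"url": "https://news.example/b", "semantic": 0.5, "authority": 0.9},
]
```

Re-ranking with a score independent of the search engine's own ordering is what counteracts the engine's ranking bias: the downstream extractor sees pages ordered by the framework's criteria, not the API's.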
Authors
Guangxin He (01.AI)
Zonghong Dai (Fudan University)
Jiangcheng Zhu (01.AI)
Binqiang Zhao (01.AI)
Chenyue Li (Hong Kong University of Science and Technology)
You Peng (Dow Inc)
Chen Wang (Tsinghua University)
Binhang Yuan (HKUST)