SearchRAG: Can Search Engines Be Helpful for LLM-based Medical Question Answering?

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limitations of large language models (LLMs) in medical question answering, namely outdated knowledge and insufficient clinical granularity, this paper proposes a retrieval-augmented generation (RAG) framework built on a real-time search engine. It introduces, for the first time, a general-purpose search engine into the medical RAG pipeline, jointly leveraging synthetic query generation and uncertainty modeling to guide knowledge selection: the former broadens retrieval coverage, while the latter dynamically filters low-confidence passages based on information gain, balancing relevance and clinical utility. The method combines search-engine APIs, prompt engineering, and a lightweight uncertainty-estimation module. Evaluated on multiple challenging medical QA benchmarks, it achieves significant accuracy improvements, especially on questions requiring up-to-date clinical guidelines or fine-grained rare-disease knowledge, establishing a scalable, low-dependency, domain-specialized RAG paradigm for dynamic knowledge injection.
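The uncertainty-based knowledge selection described above can be illustrated with a minimal sketch. This is not the authors' code: `answer_probs` is a hypothetical stand-in for an LLM that returns a probability distribution over answer options, and the information-gain criterion (keep a passage only if conditioning on it lowers answer entropy) is one plausible reading of the summary.

```python
import math
from typing import Callable, Dict, List

def entropy(probs: Dict[str, float]) -> float:
    """Shannon entropy (in bits) of a distribution over answer options."""
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

def select_passages(
    question: str,
    passages: List[str],
    answer_probs: Callable[[str], Dict[str, float]],
    top_k: int = 3,
) -> List[str]:
    """Rank retrieved passages by information gain and keep up to top_k
    passages whose gain is strictly positive (i.e. that reduce the
    model's uncertainty about the answer)."""
    base_h = entropy(answer_probs(question))
    gains = []
    for p in passages:
        h = entropy(answer_probs(f"Context: {p}\nQuestion: {question}"))
        gains.append((base_h - h, p))  # positive gain = lower uncertainty
    gains.sort(key=lambda g: g[0], reverse=True)
    return [p for gain, p in gains[:top_k] if gain > 0]
```

A passage that leaves the answer distribution unchanged has zero gain and is dropped, which matches the summary's claim that low-confidence, uninformative passages are filtered out before being injected into the LLM's input.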

📝 Abstract
Large Language Models (LLMs) have shown remarkable capabilities in general domains but often struggle with tasks requiring specialized knowledge. Conventional Retrieval-Augmented Generation (RAG) techniques typically retrieve external information from static knowledge bases, which can be outdated or incomplete, missing fine-grained clinical details essential for accurate medical question answering. In this work, we propose SearchRAG, a novel framework that overcomes these limitations by leveraging real-time search engines. Our method employs synthetic query generation to convert complex medical questions into search-engine-friendly queries and utilizes uncertainty-based knowledge selection to filter and incorporate the most relevant and informative medical knowledge into the LLM's input. Experimental results demonstrate that our method significantly improves response accuracy in medical question answering tasks, particularly for complex questions requiring detailed and up-to-date knowledge.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM accuracy in medical queries
Utilizing search engines for real-time knowledge
Improving specialized medical question answering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes real-time search engines
Employs synthetic query generation
Filters knowledge via uncertainty-based selection
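The second bullet, synthetic query generation, amounts to prompting an LLM to rewrite a clinical question into search-engine-friendly queries before calling a search API. The following prompt-template helper is a hedged illustration, not the paper's actual prompt; the wording and the `n_queries` parameter are assumptions.

```python
def build_query_prompt(question: str, n_queries: int = 3) -> str:
    """Build a prompt asking an LLM to turn a complex medical question
    into short, keyword-style queries suitable for a general-purpose
    search engine (one query per line)."""
    return (
        f"Rewrite the medical question below into {n_queries} concise, "
        "search-engine-friendly queries, one per line, keeping the key "
        "clinical terms.\n\n"
        f"Question: {question}\nQueries:"
    )
```

The LLM's line-separated output would then be issued to the search API, and the retrieved passages passed through the uncertainty-based filter before augmenting the final answer prompt.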