Benchmarking Retrieval-Augmented Large Language Models in Biomedical NLP: Application, Robustness, and Self-Awareness

📅 2024-05-13
🏛️ arXiv.org
📈 Citations: 7
Influential: 0
📄 PDF
🤖 AI Summary
Retrieval-augmented large language models (RALs) lack systematic evaluation in biomedical NLP, hindering assessment of their reliability and trustworthiness. Method: We introduce the first comprehensive RAL benchmark for biomedicine, comprising four core capabilities—unlabeled-knowledge robustness, counterfactual robustness, diverse-input robustness, and negative-knowledge awareness—supported by four dedicated testbeds. We conduct cross-model, cross-task, multi-dimensional evaluation across nine biomedical datasets and five NLP tasks, integrating three mainstream LLMs and three retrieval architectures, enhanced by an adaptive retrieval-augmentation mechanism and novel meta-evaluation metrics. Contribution/Results: Experiments reveal pervasive failures of RALs under counterfactual and negative-knowledge scenarios, exposing critical trust deficiencies. To bridge this gap, we publicly release the benchmark datasets, evaluation toolkit, and fully reproducible results—establishing a foundational resource to advance trustworthy biomedical RAL research.

Technology Category

Application Category

📝 Abstract
Large language models (LLM) have demonstrated remarkable capabilities in various biomedical natural language processing (NLP) tasks, leveraging the demonstration within the input context to adapt to new tasks. However, LLM is sensitive to the selection of demonstrations. To address the hallucination issue inherent in LLM, retrieval-augmented LLM (RAL) offers a solution by retrieving pertinent information from an established database. Nonetheless, existing research work lacks rigorous evaluation of the impact of retrieval-augmented large language models on different biomedical NLP tasks. This deficiency makes it challenging to ascertain the capabilities of RAL within the biomedical domain. Moreover, the outputs from RAL are affected by retrieving the unlabeled, counterfactual, or diverse knowledge that is not well studied in the biomedical domain. However, such knowledge is common in the real world. Finally, exploring the self-awareness ability is also crucial for the RAL system. So, in this paper, we systematically investigate the impact of RALs on 5 different biomedical tasks (triple extraction, link prediction, classification, question answering, and natural language inference). We analyze the performance of RALs in four fundamental abilities, including unlabeled robustness, counterfactual robustness, diverse robustness, and negative awareness. To this end, we proposed an evaluation framework to assess the RALs'performance on different biomedical NLP tasks and establish four different testbeds based on the aforementioned fundamental abilities. Then, we evaluate 3 representative LLMs with 3 different retrievers on 5 tasks over 9 datasets.
Problem

Research questions and friction points this paper is trying to address.

Evaluating retrieval-augmented LLMs' performance across biomedical NLP tasks
Assessing robustness against unlabeled, counterfactual, and diverse knowledge
Investigating self-awareness capabilities in biomedical retrieval-augmented systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-augmented LLMs retrieve pertinent biomedical database information
Systematic evaluation framework assesses robustness across biomedical tasks
Tests models on unlabeled, counterfactual and diverse knowledge scenarios
🔎 Similar Papers
No similar papers found.
M
Mingchen Li
Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, MN, USA
Zaifu Zhan
Zaifu Zhan
PhD at University of Minnesota, MS at Tsinghua University
Natural language processingMachine LearningAI for BiomedicineLarge Language model
H
Han Yang
Institute for Health Informatics, University of Minnesota, Minneapolis, MN, USA
Yongkang Xiao
Yongkang Xiao
PhD student in University of Minnesota
Large Language ModelsKnowledge GraphsNLPHealth Informatics
J
Jiatan Huang
Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, MN, USA
R
Rui Zhang
Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, MN, USA