Data-Efficient Biomedical In-Context Learning: A Diversity-Enhanced Submodular Perspective

📅 2025-08-11
📈 Citations: 0 (influential: 0)
🤖 AI Summary
In biomedical in-context learning (ICL), conventional example selection overly emphasizes representativeness while neglecting diversity, which limits generalization. To address this, we propose Dual-Div, a two-stage retrieval-and-ranking framework that, for the first time, integrates diversity-enhanced submodular optimization into the initial retrieval phase to jointly optimize representativeness and diversity. Empirical analysis reveals that diversity critically governs few-shot performance, and that peak efficiency is reached with only 3–5 examples. Evaluated across three biomedical retrievers (BGE-Large, BMRetriever, MedCPT) and two LLM backbones (LLaMA 3.1, Qwen 2.5), Dual-Div consistently outperforms baselines on NER, RE, and TC tasks, achieving up to a 5% macro-F1 gain, and demonstrates strong robustness to prompt-order perturbations and class imbalance.

📝 Abstract
Recent progress in large language models (LLMs) has leveraged their in-context learning (ICL) abilities to enable quick adaptation to unseen biomedical NLP tasks. By incorporating only a few input-output examples into prompts, LLMs can rapidly perform these new tasks. While the impact of these demonstrations on LLM performance has been extensively studied, most existing approaches prioritize representativeness over diversity when selecting examples from large corpora. To address this gap, we propose Dual-Div, a diversity-enhanced, data-efficient framework for demonstration selection in biomedical ICL. Dual-Div employs a two-stage retrieval and ranking process: first, it identifies a limited set of candidate examples from a corpus by optimizing both representativeness and diversity (with optional annotation for unlabeled data); second, it ranks these candidates against test queries to select the most relevant and non-redundant demonstrations. Evaluated on three biomedical NLP tasks (named entity recognition (NER), relation extraction (RE), and text classification (TC)) using LLaMA 3.1 and Qwen 2.5 for inference, along with three retrievers (BGE-Large, BMRetriever, MedCPT), Dual-Div consistently outperforms baselines, achieving up to 5% higher macro-F1 scores, while demonstrating robustness to prompt permutations and class imbalance. Our findings establish that diversity in initial retrieval is more critical than ranking-stage optimization, and that limiting demonstrations to 3–5 examples maximizes performance efficiency.
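The page describes stage one as submodular optimization over representativeness and diversity but does not give the exact objective. The following is a minimal sketch, assuming a facility-location representativeness term plus a concave cluster-coverage diversity reward (both monotone submodular, so plain greedy carries the usual (1 - 1/e) guarantee); the function names and parameters here are illustrative, not the paper's.

```python
# Minimal sketch of a diversity-enhanced submodular retrieval stage (stage 1).
# Assumed objective: F(S) = sum_i max_{j in S} sim(i, j)       (representativeness)
#                         + lam * sum_k sqrt(|S ∩ cluster_k|)  (diversity)
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def greedy_submodular_select(emb: np.ndarray, budget: int,
                             n_clusters: int = 10, lam: float = 1.0) -> list[int]:
    """Pick `budget` candidate demonstrations from corpus embeddings `emb`."""
    sim = cosine_similarity(emb)                              # pairwise similarity
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(emb)          # clusters for diversity
    selected: list[int] = []
    cluster_counts = np.zeros(n_clusters)
    best_sim = np.zeros(len(emb))                             # coverage of each corpus item

    for _ in range(budget):
        # Marginal gain of each candidate j: coverage improvement + diversity bonus.
        cover_gain = np.maximum(sim - best_sim[:, None], 0.0).sum(axis=0)
        div_gain = lam * (np.sqrt(cluster_counts[labels] + 1)
                          - np.sqrt(cluster_counts[labels]))
        gains = cover_gain + div_gain
        if selected:
            gains[selected] = -np.inf                         # forbid re-selection
        j = int(np.argmax(gains))
        selected.append(j)
        cluster_counts[labels[j]] += 1
        best_sim = np.maximum(best_sim, sim[:, j])
    return selected
```

The selected indices form the candidate pool that the second stage then ranks against each test query.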
Problem

Research questions and friction points this paper is trying to address.

Enhancing diversity in biomedical in-context learning example selection
Optimizing representativeness and diversity for biomedical NLP tasks
Improving performance efficiency with limited demonstration examples (see the prompt sketch below)
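As context for the efficiency question above: a few-shot ICL prompt is just the task instruction followed by the selected input-output demonstrations and the test input. This is a minimal, hypothetical sketch; the paper's exact template is not shown on this page.

```python
# Hypothetical few-shot prompt assembly; the template format is illustrative.
def build_icl_prompt(instruction: str, demos: list[tuple[str, str]],
                     query: str) -> str:
    """Concatenate a handful of input-output demonstrations ahead of the test input."""
    parts = [instruction]
    for x, y in demos:
        parts.append(f"Input: {x}\nOutput: {y}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)
```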
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-Div framework enhances diversity in biomedical ICL
Two-stage retrieval and ranking jointly optimize representativeness and diversity (ranking stage sketched below)
Limiting demonstrations to 3-5 examples maximizes efficiency
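The second-stage ranking is described only as selecting the most relevant, non-redundant demonstrations per test query. A maximal-marginal-relevance-style criterion is one natural reading, sketched below under that assumption; the `beta` trade-off parameter and function names are illustrative, not the paper's confirmed formula.

```python
# Sketch of a stage-2 ranking rule: relevance to the query minus redundancy
# with already-chosen demonstrations (an MMR-style assumption).
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def rank_demonstrations(query_emb: np.ndarray, cand_emb: np.ndarray,
                        k: int = 5, beta: float = 0.7) -> list[int]:
    """Return indices of k relevant, mutually non-redundant demonstrations."""
    rel = cosine_similarity(query_emb.reshape(1, -1), cand_emb)[0]  # query relevance
    sim = cosine_similarity(cand_emb)                               # candidate overlap
    chosen: list[int] = []
    for _ in range(min(k, len(cand_emb))):
        redundancy = sim[:, chosen].max(axis=1) if chosen else np.zeros(len(rel))
        scores = beta * rel - (1 - beta) * redundancy
        scores[chosen] = -np.inf
        chosen.append(int(np.argmax(scores)))
    return chosen
```

Keeping k in the 3–5 range mirrors the paper's finding that a handful of demonstrations maximizes performance efficiency.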
Jun Wang
Division of Computational Health Sciences, Department of Surgery, University of Minnesota, 516 Delaware St SE, Minneapolis, 55455, MN, USA
Zaifu Zhan
PhD at University of Minnesota, MS at Tsinghua University
Natural language processing, Machine Learning, AI for Biomedicine, Large Language Models
Qixin Zhang
College of Computing and Data Science, Nanyang Technological University, 50 Nanyang Avenue, 639798, Singapore
Mingquan Lin
Assistant Professor at University of Minnesota
Medical image analysis, Deep learning
Meijia Song
University of Minnesota
Nursing InformaticsHealth Informatics
Rui Zhang
Division of Computational Health Sciences, Department of Surgery, University of Minnesota, 516 Delaware St SE, Minneapolis, 55455, MN, USA