From Memorization to Generalization: Fine-Tuning Large Language Models for Biomedical Term-to-Identifier Normalization

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the distinct mechanisms of generalization versus memorization in large language models (LLMs) for biomedical term normalization, i.e., mapping lexical terms to standardized ontology identifiers. Ontologies such as the Gene Ontology (GO), the Human Phenotype Ontology (HPO), and gene–protein mappings differ in lexical structure and identifier popularity. The authors fine-tune Llama 3.1 8B, compare against GPT-4o baselines, and disentangle semantic generalization from rote memorization via embedding-space analysis. Key findings: identifier popularity and lexicalization strongly modulate fine-tuning efficacy; GO mappings show strong memorization gains (up to 77% improvement in term-to-identifier accuracy), gene–protein mappings show genuine generalization (a 13.9% gain), whereas weakly lexicalized ontologies such as HPO improve little. The work offers a predictive framework and practical guidelines for deploying LLMs in biomedical knowledge standardization.

📝 Abstract
Effective biomedical data integration depends on automated term normalization, the mapping of natural language biomedical terms to standardized identifiers. This linking of terms to identifiers is essential for semantic interoperability. Large language models (LLMs) show promise for this task but perform unevenly across terminologies. We evaluated both memorization (training-term performance) and generalization (validation-term performance) across multiple biomedical ontologies. Fine-tuning Llama 3.1 8B revealed marked differences by terminology. GO mappings showed strong memorization gains (up to 77% improvement in term-to-identifier accuracy), whereas HPO showed minimal improvement. Generalization occurred only for protein-gene (GENE) mappings (13.9% gain), while fine-tuning for HPO and GO yielded negligible transfer. Baseline accuracy varied by model scale, with GPT-4o outperforming both Llama variants for all terminologies. Embedding analyses showed tight semantic alignment between gene symbols and protein names but weak alignment between terms and identifiers for GO or HPO, consistent with limited lexicalization. Fine-tuning success depended on two interacting factors: identifier popularity and lexicalization. Popular identifiers were more likely encountered during pretraining, enhancing memorization. Lexicalized identifiers, such as gene symbols, enabled semantic generalization. By contrast, arbitrary identifiers in GO and HPO constrained models to rote learning. These findings provide a predictive framework for when fine-tuning enhances factual recall versus when it fails due to sparse or non-lexicalized identifiers.
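The embedding analysis described in the abstract compares how closely term embeddings align with identifier embeddings. A minimal sketch of that alignment metric is below; the hand-crafted toy vectors stand in for real LLM embeddings, and the specific values are illustrative assumptions, not data from the paper.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings standing in for LLM embedding-model outputs.
embed = {
    "TP53": np.array([0.9, 0.1, 0.0]),                  # gene symbol (lexicalized)
    "tumor protein p53": np.array([0.85, 0.15, 0.05]),  # protein name
    "GO:0006915": np.array([0.1, 0.2, 0.9]),            # arbitrary GO identifier
    "apoptotic process": np.array([0.7, 0.6, 0.1]),     # GO term label
}

# Lexicalized pair (gene symbol vs. protein name): high alignment.
gene_alignment = cosine(embed["TP53"], embed["tumor protein p53"])

# Non-lexicalized pair (GO term vs. numeric identifier): low alignment.
go_alignment = cosine(embed["apoptotic process"], embed["GO:0006915"])

print(f"gene/protein alignment: {gene_alignment:.3f}")
print(f"GO term/identifier alignment: {go_alignment:.3f}")
```

On these toy vectors the gene/protein pair scores near 1.0 while the GO pair scores low, mirroring the paper's finding that lexicalized identifiers support semantic generalization while arbitrary ones do not.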
Problem

Research questions and friction points this paper is trying to address.

Mapping biomedical terms to standardized identifiers
Evaluating fine-tuning performance across biomedical ontologies
Identifying factors affecting generalization in term normalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning Llama 3.1 8B for biomedical normalization
Using identifier popularity and lexicalization as predictors
Enhancing generalization for protein-gene mappings via fine-tuning
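The fine-tuning setup implied above relies on supervised term-to-identifier pairs. A hedged sketch of assembling such training records follows; the JSONL prompt/completion schema and the example mappings are assumptions for illustration, not the paper's actual training configuration.

```python
import json

# Illustrative term-to-identifier mappings (the paper's training data
# comes from GO, HPO, and gene-protein resources).
mappings = [
    ("apoptotic process", "GO:0006915"),
    ("Seizure", "HP:0001250"),
    ("tumor protein p53", "TP53"),
]

def to_example(term, identifier):
    """Format one supervised fine-tuning record (assumed schema)."""
    return {
        "prompt": f"Normalize the biomedical term '{term}' to its standard identifier.",
        "completion": identifier,
    }

records = [to_example(t, i) for t, i in mappings]
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

Evaluating the fine-tuned model on held-out terms (generalization) versus training terms (memorization) then separates the two effects the paper measures.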
Suswitha Pericharla
Computer Science Department, Missouri State University, 901 S. National Avenue, Springfield, MO 65897, USA.
Daniel B. Hier
Department of Neurology and Rehabilitation, University of Illinois at Chicago, 912 S. Wood Street, Chicago, IL 60612, USA.
Tayo Obafemi-Ajayi
Missouri State University
Machine learning, data mining, bioinformatics, intelligent systems