When Less Is Not More: Large Language Models Normalize Less-Frequent Terms with Lower Accuracy

📅 2024-09-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) are increasingly applied to biomedical ontology term standardization—e.g., mapping clinical phenotype terms to standardized ontology identifiers—but their reliability remains poorly characterized. Method: We systematically evaluate GPT-4o’s zero-shot mapping performance on HPO, GO, and UniProtKB terms using 268,776 real-world clinical phenotype annotations spanning 12,655 diseases, yielding 11,225 unique terms. We employ SHAP and permutation importance analyses to identify drivers of mapping errors. Contribution/Results: GPT-4o achieves only 13.1% overall accuracy; error rates rise significantly with decreasing term frequency and increasing term length. Crucially, term frequency is the strongest predictor of mapping failure—revealing a previously undocumented “frequency bias” in LLMs. This is the first study to systematically identify, quantify, and attribute such bias in ontology mapping. We advocate for balanced inclusion of high- and low-frequency terms in training and evaluation datasets to ensure robustness in precision medicine applications.
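The core evaluation above (exact-match accuracy, stratified by term frequency) can be sketched as follows. This is a minimal illustration, not the authors' code: the `accuracy_by_frequency` helper, the bin edges, and the toy records are all assumptions made for the example.

```python
from collections import defaultdict

def accuracy_by_frequency(records, bins=(1, 10, 100)):
    """Group (term, gold_id, predicted_id, corpus_frequency) records into
    frequency bins and compute exact-match accuracy per bin."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for term, gold, pred, freq in records:
        # Assign the record to the first bin whose upper bound covers it.
        label = next((f"<={b}" for b in bins if freq <= b), f">{bins[-1]}")
        totals[label] += 1
        hits[label] += int(pred == gold)
    return {label: hits[label] / totals[label] for label in totals}

# Toy records: (term, gold HPO ID, model prediction, annotation frequency).
# IDs and frequencies are illustrative, not taken from the paper's dataset.
records = [
    ("Seizure", "HP:0001250", "HP:0001250", 500),
    ("Short stature", "HP:0004322", "HP:0004322", 300),
    ("Rare phenotype A", "HP:0000001", "HP:0009999", 2),
    ("Rare phenotype B", "HP:0000002", "HP:0000002", 1),
]
print(accuracy_by_frequency(records))
```

Stratifying accuracy this way, rather than reporting a single pooled number, is what exposes the frequency bias the paper describes: a pooled score is dominated by the high-frequency bins.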

📝 Abstract
Term normalization is the process of mapping a term from free text to a standardized concept and its machine-readable code in an ontology. Accurate normalization of terms that capture phenotypic differences between patients and diseases is critical to the success of precision medicine initiatives. A large language model (LLM), such as GPT-4o, can normalize terms to the Human Phenotype Ontology (HPO), but it may retrieve incorrect HPO IDs. Reported accuracy rates for LLMs on these tasks may be inflated due to imbalanced test datasets skewed towards high-frequency terms. In our study, using a comprehensive dataset of 268,776 phenotype annotations for 12,655 diseases from the HPO, GPT-4o achieved an accuracy of 13.1% in normalizing 11,225 unique terms. However, the accuracy was unevenly distributed, with higher-frequency and shorter terms normalized more accurately than lower-frequency and longer terms. Feature importance analysis, using SHAP and permutation methods, identified low-term frequency as the most significant predictor of normalization errors. These findings suggest that training and evaluation datasets for LLM-based term normalization should balance low- and high-frequency terms to improve model performance, particularly for infrequent terms critical to precision medicine.
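The permutation-based feature-importance analysis mentioned above can be illustrated with a small self-contained sketch: shuffle one feature column at a time and measure the drop in predictive accuracy. This is a from-scratch illustration of the technique under stated assumptions (a synthetic dataset and a hypothetical threshold "model"), not the paper's SHAP pipeline or its real features.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Permutation importance: shuffle one feature column at a time and
    measure the average drop in accuracy versus the unshuffled baseline."""
    rng = random.Random(seed)

    def acc(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = acc(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            shuffled = [row[:j] + (v,) + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - acc(shuffled))
        importances.append(sum(drops) / n_repeats)
    return importances

# Synthetic data: feature 0 = term frequency, feature 1 = term length.
# By construction, the label (1 = normalization error) depends only on frequency.
random.seed(1)
X = [(random.randint(1, 200), random.randint(5, 40)) for _ in range(300)]
y = [int(freq < 20) for freq, _ in X]

# A hypothetical frequency-threshold "model" standing in for the error predictor.
model = lambda row: int(row[0] < 20)

imp = permutation_importance(model, X, y)
print(imp)  # frequency importance is large; length importance is zero here
```

Because the synthetic label depends only on frequency, shuffling the frequency column destroys accuracy while shuffling term length changes nothing, mirroring the paper's finding that low term frequency is the dominant predictor of normalization errors.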
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' accuracy in mapping biomedical ontology terms to IDs
Assessing impact of ontology ID prevalence on mapping performance
Analyzing lexicalization effect on protein-to-gene symbol mapping accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs map biomedical terms to ontology IDs
Ontology ID prevalence affects mapping accuracy
GPT-4o excels at mapping lexicalized gene symbols
D. B. Hier
Kummer Institute, Missouri University of Science and Technology
Thanh Son Do
Department of Computer Science, Missouri State University
Tayo Obafemi-Ajayi
Missouri State University
Machine learning · Data mining · Bioinformatics · Intelligent systems