MetaBench: A Multi-task Benchmark for Assessing LLMs in Metabolomics

📅 2025-10-16
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Metabolomics poses significant challenges for large language models (LLMs) due to pathway complexity, identifier heterogeneity (e.g., HMDB↔ChEBI), and fragmented data—yet no domain-specific benchmark exists to rigorously evaluate LLM capabilities in this field. To address this gap, we introduce MetaBench, the first comprehensive, multi-task evaluation benchmark for metabolomics, assessing five core competencies: knowledge mastery, semantic understanding, factual grounding, logical reasoning, and scientific application. MetaBench leverages authoritative resources (e.g., HMDB, ChEBI, KEGG) to construct a high-quality, expert-validated dataset and integrates retrieval-augmented evaluation protocols. We systematically assess 25 open- and closed-weight LLMs. Results reveal robust performance in general text generation but substantial limitations in cross-database identifier mapping and reasoning over long-tail metabolites—those with low occurrence frequency and sparse annotations. MetaBench establishes the first rigorous, task-diverse standard for evaluating LLMs in metabolomics, providing empirical foundations and methodological guidance for developing domain-optimized AI tools.
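To make the identifier-grounding task highlighted above concrete, below is a minimal Python sketch of how an HMDB→ChEBI mapping item could be scored by exact match. The item schema, field names, and the example mapping are assumptions for illustration; MetaBench's actual data format and scoring code are not shown on this page.

```python
# Hypothetical sketch of cross-database identifier grounding with exact-match scoring.
# Field names and the HMDB -> ChEBI example pair are illustrative assumptions,
# not MetaBench's published item schema.

from dataclasses import dataclass

@dataclass
class GroundingItem:
    query_id: str    # source identifier, e.g. an HMDB accession
    target_db: str   # database the model must ground into, e.g. "ChEBI"
    gold_id: str     # expected identifier in the target database

def normalize(identifier: str) -> str:
    """Strip whitespace and case so trivially different spellings still match."""
    return identifier.strip().upper()

def score_grounding(items: list[GroundingItem], predictions: dict[str, str]) -> float:
    """Exact-match accuracy over identifier-mapping items."""
    correct = sum(
        normalize(predictions.get(item.query_id, "")) == normalize(item.gold_id)
        for item in items
    )
    return correct / len(items) if items else 0.0

# Usage: compare model outputs against a gold mapping (pair shown for demonstration only).
items = [GroundingItem("HMDB0000122", "ChEBI", "CHEBI:17634")]
print(score_grounding(items, {"HMDB0000122": "chebi:17634"}))  # -> 1.0
```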

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities on general text; however, their proficiency in specialized scientific domains that require deep, interconnected knowledge remains largely uncharacterized. Metabolomics presents unique challenges with its complex biochemical pathways, heterogeneous identifier systems, and fragmented databases. To systematically evaluate LLM capabilities in this domain, we introduce MetaBench, the first benchmark for metabolomics assessment. Curated from authoritative public resources, MetaBench evaluates five capabilities essential for metabolomics research: knowledge, understanding, grounding, reasoning, and research. Our evaluation of 25 open- and closed-source LLMs reveals distinct performance patterns across metabolomics tasks: while models perform well on text generation tasks, cross-database identifier grounding remains challenging even with retrieval augmentation. Model performance also decreases on long-tail metabolites with sparse annotations. With MetaBench, we provide essential infrastructure for developing and evaluating metabolomics AI systems, enabling systematic progress toward reliable computational tools for metabolomics research.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM capabilities in metabolomics domain knowledge and reasoning
Assessing model performance on cross-database identifier grounding challenges
Addressing performance gaps on long-tail metabolites with sparse annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

MetaBench, the first multi-task benchmark for assessing LLM capabilities in metabolomics
Evaluates five core capabilities: knowledge, understanding, grounding, reasoning, and research
Tests cross-database identifier grounding and long-tail metabolite performance (see the sketch after this list)
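The long-tail finding lends itself to a simple stratified read-out: split evaluation items by how richly annotated each metabolite is and compare accuracy across strata. Below is a minimal sketch, assuming per-metabolite annotation counts are available from the source databases; the function name, threshold, and toy data are illustrative, not MetaBench's published protocol.

```python
# Hypothetical sketch: stratify accuracy by annotation richness to surface the
# long-tail drop described in the summary. Threshold and identifiers are
# illustrative assumptions.

from collections import defaultdict

def stratified_accuracy(results, annotation_counts, threshold=5):
    """results: list of (metabolite_id, is_correct); annotation_counts: id -> int."""
    buckets = defaultdict(lambda: [0, 0])  # bucket -> [correct, total]
    for metabolite_id, is_correct in results:
        sparse = annotation_counts.get(metabolite_id, 0) < threshold
        bucket = "long_tail" if sparse else "well_annotated"
        buckets[bucket][0] += int(is_correct)
        buckets[bucket][1] += 1
    return {name: correct / total for name, (correct, total) in buckets.items() if total}

# Usage with toy data: a well-annotated metabolite answered correctly,
# a sparsely annotated one answered incorrectly.
print(stratified_accuracy(
    [("common_metabolite", True), ("rare_metabolite", False)],
    {"common_metabolite": 40, "rare_metabolite": 1},
))
```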
👥 Authors
Yuxing Lu
Peking University, PKU-GT-Emory Joint PhD Program
BioMedical AI, AI4S
Xukai Zhao
Tsinghua University
Urban Perception, Deep Learning
J. B. Tamo
School of Electrical and Computer Engineering, Georgia Institute of Technology
Micky C. Nnamdi
School of Electrical and Computer Engineering, Georgia Institute of Technology
Rui Peng
College of Future Technology, Peking University
Shuang Zeng
Peking University, Georgia Institute of Technology
Self-supervised Contrastive Learning, Medical Image Segmentation, Superpixel, Large Language Model
Xingyu Hu
School of Computer Science, Georgia Institute of Technology
Jinzhuo Wang
Peking University
May D. Wang
School of Electrical and Computer Engineering, Georgia Institute of Technology