🤖 AI Summary
Metabolomics poses significant challenges for large language models (LLMs) due to pathway complexity, identifier heterogeneity (e.g., HMDB↔ChEBI), and fragmented data, yet no domain-specific benchmark exists to rigorously evaluate LLM capabilities in this field. To address this gap, we introduce MetaBench, the first comprehensive, multi-task evaluation benchmark for metabolomics, assessing five core competencies: knowledge mastery, semantic understanding, factual grounding, logical reasoning, and scientific application. MetaBench leverages authoritative resources (e.g., HMDB, ChEBI, KEGG) to construct a high-quality, expert-validated dataset and integrates retrieval-augmented evaluation protocols. We systematically assess 25 open- and closed-weight LLMs. Results reveal robust performance on general text generation but substantial limitations in cross-database identifier mapping and in reasoning over long-tail metabolites, i.e., those with low occurrence frequency and sparse annotations. MetaBench establishes the first rigorous, task-diverse standard for evaluating LLMs in metabolomics, providing empirical foundations and methodological guidance for developing domain-optimized AI tools.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities on general text; however, their proficiency in specialized scientific domains that require deep, interconnected knowledge remains largely uncharacterized. Metabolomics presents unique challenges with its complex biochemical pathways, heterogeneous identifier systems, and fragmented databases. To systematically evaluate LLM capabilities in this domain, we introduce MetaBench, the first benchmark for assessing LLMs in metabolomics. Curated from authoritative public resources, MetaBench evaluates five capabilities essential for metabolomics research: knowledge, understanding, grounding, reasoning, and research. Our evaluation of 25 open- and closed-source LLMs reveals distinct performance patterns across metabolomics tasks: while models perform well on text generation tasks, cross-database identifier grounding remains challenging even with retrieval augmentation. Model performance also decreases on long-tail metabolites with sparse annotations. With MetaBench, we provide essential infrastructure for developing and evaluating metabolomics AI systems, enabling systematic progress toward reliable computational tools for metabolomics research.
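To make the cross-database identifier grounding task concrete, the sketch below shows how an exact-match evaluation over HMDB→ChEBI mappings might look. This is an illustrative reconstruction, not code from MetaBench: the function name, the scoring choice (strict exact match), and the identifier pairs are all hypothetical placeholders.

```python
# Hypothetical sketch of cross-database identifier grounding evaluation.
# Data and function names are illustrative, not taken from MetaBench;
# the identifier pairs below are placeholders, not verified mappings.

def grounding_accuracy(predictions: dict[str, str], gold: dict[str, str]) -> float:
    """Exact-match accuracy: fraction of HMDB IDs mapped to the correct ChEBI ID."""
    correct = sum(
        1 for hmdb_id, chebi_id in gold.items()
        if predictions.get(hmdb_id) == chebi_id
    )
    return correct / len(gold)

# Toy gold standard and model output (placeholder identifiers).
gold = {"HMDB0000001": "CHEBI:00001", "HMDB0000002": "CHEBI:00002"}
model_output = {"HMDB0000001": "CHEBI:00001", "HMDB0000002": "CHEBI:99999"}

print(grounding_accuracy(model_output, gold))  # 0.5: one of two mappings correct
```

Strict exact match is a deliberately unforgiving metric here; it penalizes near-miss identifiers equally with outright hallucinations, which matches the paper's observation that identifier mapping remains hard even with retrieval augmentation.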