EHRStruct: A Comprehensive Benchmark Framework for Evaluating Large Language Models on Structured Electronic Health Record Tasks

📅 2025-11-11
🤖 AI Summary
Existing evaluations of large language models (LLMs) on structured electronic health records (EHRs) lack standardized benchmarks and well-defined clinical tasks. Method: We introduce EHRStruct—the first comprehensive benchmark for structured EHRs—comprising 11 multi-granularity clinical reasoning tasks and 2,200 high-quality samples. Leveraging this framework, we systematically evaluate 20 mainstream LLMs, analyzing the impact of input formatting, few-shot learning, and fine-tuning strategies, and propose a code-augmented reasoning mechanism to enhance structured-data comprehension. Contribution/Results: Experiments reveal persistent limitations of current LLMs in complex clinical logical reasoning. Our proposed EHRMaster method outperforms 11 state-of-the-art structured-data enhancement techniques across all EHRStruct tasks, demonstrating that joint optimization of task standardization and reasoning augmentation significantly improves clinical reasoning performance.
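The summary's "code-augmented reasoning mechanism" refers to having the model emit executable code over the structured EHR table rather than reasoning in free text. The sketch below is purely illustrative and not the paper's EHRMaster implementation: the toy `labs` table, the `stub_llm` stand-in, and the `answer` helper are all assumptions introduced for this example.

```python
# Minimal sketch of code-augmented reasoning over structured EHR data.
# All names (labs, stub_llm, answer) are illustrative, not from the paper.
import pandas as pd

# Toy structured EHR table: one row per lab measurement.
labs = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 2],
    "lab_name":   ["glucose", "creatinine", "glucose", "sodium", "glucose"],
    "value":      [182.0, 1.1, 95.0, 141.0, 210.0],
    "abnormal":   [True, False, False, False, True],
})

def stub_llm(question: str, schema: list) -> str:
    """Stand-in for an LLM that, given a question and the table schema,
    emits executable pandas code instead of answering in free text."""
    # A real system would prompt a model here; we return a canned program.
    return "result = int(labs[(labs.patient_id == 1) & labs.abnormal].shape[0])"

def answer(question: str, table: pd.DataFrame):
    """Run the model-generated code in a restricted scope and read back
    the `result` variable it is expected to define."""
    code = stub_llm(question, list(table.columns))
    scope = {"labs": table}
    exec(code, scope)
    return scope["result"]

print(answer("How many abnormal lab results does patient 1 have?", labs))
```

The design point is that arithmetic and filtering happen deterministically in the executed code, so the model only has to translate the clinical question into a program over the table schema.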

📝 Abstract
Structured Electronic Health Record (EHR) data stores patient information in relational tables and plays a central role in clinical decision-making. Recent advances have explored the use of large language models (LLMs) to process such data, showing promise across various clinical tasks. However, the absence of standardized evaluation frameworks and clearly defined tasks makes it difficult to systematically assess and compare LLM performance on structured EHR data. To address these evaluation challenges, we introduce EHRStruct, a benchmark specifically designed to evaluate LLMs on structured EHR tasks. EHRStruct defines 11 representative tasks spanning diverse clinical needs and includes 2,200 task-specific evaluation samples derived from two widely used EHR datasets. We use EHRStruct to evaluate 20 advanced and representative LLMs, covering both general and medical models. We further analyze key factors influencing model performance, including input formats, few-shot generalization, and fine-tuning strategies, and compare results with 11 state-of-the-art LLM-based enhancement methods for structured data reasoning. Our results indicate that many structured EHR tasks place high demands on the understanding and reasoning capabilities of LLMs. In response, we propose EHRMaster, a code-augmented method that achieves state-of-the-art performance and offers practical…
Problem

Research questions and friction points this paper is trying to address.

Lack of a standardized evaluation framework for structured EHR tasks
Difficulty of systematically assessing LLMs on structured EHR data
Need for a benchmark supporting clinical decision-making with EHR data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes EHRStruct benchmark for structured EHR evaluation
Introduces EHRMaster code-augmented method for enhanced performance
Evaluates 20 LLMs across 11 clinical tasks systematically
Xiao Yang
College of Computing and Data Science, Nanyang Technological University (NTU), Singapore
Xuejiao Zhao
Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY), NTU, Singapore
Zhiqi Shen
Nanyang Technological University