ESL-Bench: An Event-Driven Synthetic Longitudinal Benchmark for Health Agents

πŸ“… 2026-04-03
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the lack of scalable, longitudinal, multi-source temporal benchmarks with structured ground truth for evaluating health-focused intelligent agents. The authors propose the first synthetic longitudinal health benchmark that integrates large language model (LLM)-driven semantic planning with algorithmic physiological simulation. It comprises trajectories of 100 virtual users (each spanning 1–5 years), including health records, narrative plans, wearable device data, clinical examination logs, and event logs annotated with explicit causal parameters. The dataset is accompanied by computable ground-truth queries organized into three difficulty tiers. The benchmark supports five reasoning tasks emphasizing multi-hop inference and evidence attribution. Evaluations across 13 methods demonstrate that database-native agents achieve significantly higher accuracy (48–58%) than memory-augmented RAG baselines (30–38%), validating the benchmark's utility for complex health reasoning.
πŸ“ Abstract
Longitudinal health agents must reason across multi-source trajectories that combine continuous device streams, sparse clinical exams, and episodic life events - yet evaluating them is hard: real-world data cannot be released at scale, and temporally grounded attribution questions seldom admit definitive answers without structured ground truth. We present ESL-Bench, an event-driven synthesis framework and benchmark providing 100 synthetic users, each with a 1-5 year trajectory comprising a health profile, a multi-phase narrative plan, daily device measurements, periodic exam records, and an event log with explicit per-indicator impact parameters. Each indicator follows a baseline stochastic process driven by discrete events with sigmoid-onset, exponential-decay kernels under saturation and projection constraints; a hybrid pipeline delegates sparse semantic artifacts to LLM-based planning and dense indicator dynamics to algorithmic simulation with hard physiological bounds. Users are each paired with 100 evaluation queries across five dimensions - Lookup, Trend, Comparison, Anomaly, Explanation - stratified into Easy, Medium, and Hard tiers, with all ground-truth answers programmatically computable from the recorded event-indicator relationships. Evaluating 13 methods spanning LLMs with tools, DB-native agents, and memory-augmented RAG, we find that DB agents (48-58%) substantially outperform memory RAG baselines (30-38%), with the gap concentrated on Comparison and Explanation queries where multi-hop reasoning and evidence attribution are required.
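The abstract's simulation model (a baseline stochastic process perturbed by discrete events through sigmoid-onset, exponential-decay kernels, clipped to hard physiological bounds) can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: the function names, the kernel time constants, the mean-reversion term, and the parameter values are all assumptions introduced for clarity.

```python
import math
import random

def event_kernel(t, t_event, magnitude, onset_days=3.0, decay_days=14.0):
    """Illustrative per-event impact: sigmoid onset followed by
    exponential decay, zero before the event occurs."""
    dt = t - t_event
    if dt < 0:
        return 0.0
    onset = 1.0 / (1.0 + math.exp(-(dt - onset_days) / (onset_days / 4)))
    decay = math.exp(-dt / decay_days)
    return magnitude * onset * decay

def simulate_indicator(n_days, baseline, sigma, events, lo, hi, seed=0):
    """Daily indicator series: baseline stochastic drift with mean
    reversion, plus summed event kernels, clipped to the hard
    physiological bounds [lo, hi] (the 'saturation' constraint)."""
    rng = random.Random(seed)
    values, level = [], baseline
    for t in range(n_days):
        level += rng.gauss(0.0, sigma)          # baseline stochastic process
        level += 0.1 * (baseline - level)       # pull back toward baseline
        impact = sum(event_kernel(t, te, m) for te, m in events)
        values.append(min(hi, max(lo, level + impact)))
    return values

# Example: resting heart rate, baseline 62 bpm, an illness event at
# day 30 with a +8 bpm peak impact, bounded to a plausible [40, 200].
series = simulate_indicator(60, 62.0, 0.4, [(30, 8.0)], 40.0, 200.0)
```

Because every event's onset, decay, and magnitude parameters are recorded alongside the series, ground-truth answers to attribution queries ("why did resting heart rate rise in week 5?") stay programmatically computable, which is the property the benchmark's Explanation tier relies on.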
Problem

Research questions and friction points this paper is trying to address.

longitudinal health agents
synthetic benchmark
event-driven data
multi-source trajectories
evaluation framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

event-driven synthesis
longitudinal health agents
hybrid simulation pipeline
structured ground truth
multi-hop reasoning benchmark
Chao Li
Shanda Group
Cailiang Liu
Shanda Group
Ang Gao
Shanda Group
Kexin Deng
Shanda Group
Shu Zhang
Shanda Group
Langping Xu
Shanda Group
Xiaotong Shi
Shanda Group
Xionghao Ding
Shanda Group
Jian Pei
Arthur S. Pearse Distinguished Professor, Duke University
Data mining, big data analytics, database systems, information retrieval
Xun Jiang
Shanda Group