AI Summary
This work addresses the lack of scalable, longitudinal, multi-source temporal benchmarks with structured ground truth for evaluating health-focused intelligent agents. The authors propose the first synthetic longitudinal health benchmark to integrate large language model (LLM)-driven semantic planning with algorithmic physiological simulation. It comprises trajectories for 100 virtual users (each spanning 1–5 years), including health records, narrative plans, wearable-device data, clinical examination logs, and event logs annotated with explicit causal parameters. The dataset is accompanied by 500 computable ground-truth queries organized into three difficulty tiers, and it supports five reasoning tasks emphasizing multi-hop inference and evidence attribution. Evaluations of 13 methods show that database-native agents achieve substantially higher accuracy (48–58%) than memory-augmented RAG baselines (30–38%), validating the benchmark's utility for complex health reasoning.
Abstract
Longitudinal health agents must reason over multi-source trajectories that combine continuous device streams, sparse clinical exams, and episodic life events; yet evaluating them is hard: real-world data cannot be released at scale, and temporally grounded attribution questions seldom admit definitive answers without structured ground truth. We present ESL-Bench, an event-driven synthesis framework and benchmark providing 100 synthetic users, each with a 1–5-year trajectory comprising a health profile, a multi-phase narrative plan, daily device measurements, periodic exam records, and an event log with explicit per-indicator impact parameters. Each indicator follows a baseline stochastic process perturbed by discrete events through sigmoid-onset, exponential-decay kernels under saturation and projection constraints; a hybrid pipeline delegates sparse semantic artifacts to LLM-based planning and dense indicator dynamics to algorithmic simulation with hard physiological bounds. Each user is paired with 100 evaluation queries across five dimensions (Lookup, Trend, Comparison, Anomaly, and Explanation), stratified into Easy, Medium, and Hard tiers, with all ground-truth answers programmatically computable from the recorded event-indicator relationships. Evaluating 13 methods spanning LLMs with tools, DB-native agents, and memory-augmented RAG, we find that DB agents (48–58%) substantially outperform memory-RAG baselines (30–38%), with the gap concentrated on Comparison and Explanation queries, where multi-hop reasoning and evidence attribution are required.
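To make the simulation model concrete, the following is a minimal sketch of what a sigmoid-onset, exponential-decay event kernel superimposed on a baseline with hard physiological bounds might look like. All function names, parameter names, and parameter values here (e.g. `onset_rate`, `decay_rate`, the 5-day onset midpoint) are illustrative assumptions, not ESL-Bench's actual implementation or API.

```python
import math

def event_kernel(t, t_event, amplitude, onset_rate=1.0, decay_rate=0.05):
    """Illustrative sigmoid-onset, exponential-decay impact kernel.

    t and t_event are in days; amplitude plays the role of a
    per-indicator impact parameter. Names and constants are assumptions.
    """
    dt = t - t_event
    if dt < 0:
        return 0.0  # an event has no effect before it occurs
    onset = 1.0 / (1.0 + math.exp(-onset_rate * (dt - 5)))  # sigmoid ramp-up, midpoint ~5 days
    decay = math.exp(-decay_rate * dt)                      # gradual exponential fade-out
    return amplitude * onset * decay

def simulate(baseline, events, horizon, lo, hi):
    """Superimpose event kernels on a constant baseline, clipping each
    daily value to hard physiological bounds [lo, hi] (saturation)."""
    series = []
    for t in range(horizon):
        value = baseline + sum(event_kernel(t, te, amp) for te, amp in events)
        series.append(min(hi, max(lo, value)))
    return series

# Hypothetical example: resting heart rate with a 60 bpm baseline and an
# illness event at day 30 whose impact amplitude is +15 bpm.
trajectory = simulate(60.0, [(30, 15.0)], horizon=120, lo=40.0, hi=180.0)
```

In this sketch the indicator rises smoothly after the event, peaks, and relaxes back toward baseline; a real trajectory would additionally carry the baseline stochastic process (e.g. noise and slow drift) that the abstract describes.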