AI Summary
This work addresses the limitations of current large language models in real-world clinical settings, where electronic health records (EHRs) are often noisy and unstructured, hindering reliable diagnostic and therapeutic decision-making. To bridge this gap, the authors introduce AgentEHR, a new benchmark that tasks agents with performing complex clinical reasoning directly on raw, high-noise EHR data. They further propose RetroSum, a novel framework that mitigates information loss in long-context EHRs through retrospective summarization and enhances reasoning coherence and domain adaptability via a memory bank-driven experience evolution strategy. Experimental results demonstrate that RetroSum achieves up to a 29.16% performance improvement over strong baselines and reduces interaction errors by as much as 92.3%, highlighting its effectiveness in handling realistic clinical data.
Abstract
Large Language Models have demonstrated profound utility in the medical domain. However, their application to autonomous Electronic Health Records (EHRs) navigation remains constrained by a reliance on curated inputs and simplified retrieval tasks. To bridge the gap between idealized experimental settings and realistic clinical environments, we present AgentEHR. This benchmark challenges agents to execute complex decision-making tasks, such as diagnosis and treatment planning, requiring long-range interactive reasoning directly within raw and high-noise databases. In tackling these tasks, we identify that existing summarization methods inevitably suffer from critical information loss and fractured reasoning continuity. To address this, we propose RetroSum, a novel framework that unifies a retrospective summarization mechanism with an evolving experience strategy. By dynamically re-evaluating interaction history, the retrospective mechanism prevents long-context information loss and ensures unbroken logical coherence. Additionally, the evolving strategy bridges the domain gap by retrieving accumulated experience from a memory bank. Extensive empirical evaluations demonstrate that RetroSum achieves performance gains of up to 29.16% over competitive baselines, while significantly decreasing total interaction errors by up to 92.3%.
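To make the two mechanisms concrete, the following is a minimal, hedged sketch of the abstract's two ideas: a memory bank that retrieves past experience by task similarity, and an agent that re-summarizes its *entire* interaction history at each step rather than appending to a fixed running summary. All names (`MemoryBank`, `RetrospectiveAgent`, the keyword-overlap retrieval, the string-based summarizer standing in for an LLM call) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryBank:
    """Stores (task keywords, experience) pairs from past episodes (illustrative)."""
    entries: list = field(default_factory=list)

    def add(self, keywords, experience):
        self.entries.append((set(keywords), experience))

    def retrieve(self, keywords, k=1):
        # Rank stored experiences by keyword overlap with the current task;
        # the paper's retrieval is presumably richer (e.g. embeddings).
        scored = sorted(self.entries,
                        key=lambda e: len(e[0] & set(keywords)),
                        reverse=True)
        return [exp for _, exp in scored[:k]]


class RetrospectiveAgent:
    """Keeps the full interaction log and re-summarizes it retrospectively."""

    def __init__(self, bank):
        self.bank = bank
        self.history = []  # full interaction history, never truncated

    def step(self, observation):
        self.history.append(observation)
        # Retrospective summarization: re-evaluate the whole history each
        # step, so early details can be re-weighted once later context
        # arrives, instead of being lost in an incremental running summary.
        return self.summarize(self.history)

    def summarize(self, history):
        # Stand-in for an LLM summarization call: keep the last few events
        # verbatim and compress the rest into a count.
        if len(history) <= 3:
            return " | ".join(history)
        return f"[{len(history) - 3} earlier events] | " + " | ".join(history[-3:])
```

Under this sketch, retrieved experience from the bank would be injected into the agent's prompt at the start of a new task, while the retrospective summary replaces the usual append-only context as the episode grows.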