🤖 AI Summary
Existing vision-language models struggle to automate complex, long-horizon software tasks within medical information systems (such as EHRs and DICOM viewers) due to insufficient multi-step interaction and long-range reasoning capabilities. To address this limitation, this work proposes CarePilot, a multi-agent framework grounded in the actor-critic paradigm that integrates dual memory mechanisms (long-term experience memory and short-term task memory) with tool grounding, refining its action predictions through iterative agentic simulation. The authors also introduce CareFlow, the first high-quality, long-horizon human-computer interaction benchmark tailored to the medical domain, and demonstrate CarePilot's superior performance on both CareFlow and out-of-distribution data, surpassing the strongest closed-source and open-source multimodal baselines by 15.26% and 3.38%, respectively, thereby substantially advancing long-horizon task execution in clinical software environments.
📝 Abstract
Multimodal agentic pipelines are transforming human-computer interaction by enabling efficient and accessible automation of complex, real-world tasks. However, recent efforts have focused on short-horizon or general-purpose applications (e.g., mobile or desktop interfaces), leaving long-horizon automation for domain-specific systems, particularly in healthcare, largely unexplored. To address this, we introduce CareFlow, a high-quality human-annotated benchmark comprising complex, long-horizon software workflows across medical annotation tools, DICOM viewers, EHR systems, and laboratory information systems. On this benchmark, existing vision-language models (VLMs) perform poorly, struggling with long-horizon reasoning and multi-step interactions in medical contexts. To overcome this, we propose CarePilot, a multi-agent framework based on the actor-critic paradigm. The Actor integrates tool grounding with dual-memory mechanisms (long-term and short-term experience memory) to predict the next semantic action from the visual interface and system state. The Critic evaluates each action, updates memory based on observed effects, and either executes the action or provides corrective feedback to refine the workflow. Through iterative agentic simulation, the Actor learns to make more robust and reasoning-aware predictions during inference. Our experiments show that CarePilot achieves state-of-the-art performance, outperforming strong closed-source and open-source multimodal baselines by approximately 15.26% and 3.38%, respectively, on our benchmark and an out-of-distribution dataset.
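The propose-critique-execute loop described above can be illustrated with a minimal sketch. All class and method names below are our own illustrative assumptions, not the paper's actual API; in the real system the Actor would be a VLM grounding tools against a screenshot, and the Critic would judge an action's observed effect on the interface.

```python
from dataclasses import dataclass, field

@dataclass
class DualMemory:
    """Assumed stand-in for CarePilot's dual memory: persistent cross-task
    experience plus a scratchpad for the current task."""
    long_term: list = field(default_factory=list)
    short_term: list = field(default_factory=list)

class Actor:
    def propose(self, observation, memory):
        # Placeholder for VLM-based prediction of the next semantic action
        # from the visual interface and system state.
        return {"action": "click", "target": observation["next_button"]}

class Critic:
    def review(self, action, observation):
        # Accept the action if its target exists in the current UI state;
        # otherwise return corrective feedback for the Actor.
        ok = action["target"] in observation["visible_elements"]
        feedback = None if ok else f"target {action['target']!r} not visible"
        return ok, feedback

def run_step(actor, critic, memory, observation, max_retries=3):
    """One propose -> critique -> execute/refine iteration."""
    for _ in range(max_retries):
        action = actor.propose(observation, memory)
        ok, feedback = critic.review(action, observation)
        if ok:
            memory.short_term.append(action)  # record the executed action
            return action
        memory.short_term.append({"feedback": feedback})  # corrective signal
    return None
```

This only captures the control flow; the paper's contribution lies in how the Actor and Critic are instantiated with multimodal models and how memory is distilled across simulated episodes.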