🤖 AI Summary
This work addresses the challenge that current large language model (LLM) agents struggle to handle realistic user interruptions, such as mid-task goal modifications, new requests, or instruction retractions, in long-horizon, dynamic web navigation scenarios. The study formally defines three types of real-world user interruptions (addition, revision, and retraction) and introduces a method for synthesizing high-quality interruption scenarios under strict semantic constraints. Building upon WebArena-Lite, the authors construct InterruptBench, the first benchmark specifically designed to evaluate LLMs’ interruptibility in tasks involving persistent state changes. Using a unified simulation framework, they assess mainstream LLMs’ adaptability and recovery efficiency under both single- and multi-turn interruptions. Experimental results reveal that existing models still face significant difficulties in multi-turn interruption settings.
📝 Abstract
As LLM agents transition from short, static problem solving to executing complex, long-horizon tasks in dynamic environments, the ability to handle user interruptions, such as adding requirements or revising goals, during mid-task execution is becoming a core requirement for realistic deployment. However, existing benchmarks largely assume uninterrupted agent behavior or study interruptions only in short, unconstrained language tasks. In this paper, we present the first systematic study of interruptible agents in long-horizon, environmentally grounded web navigation tasks, where actions induce persistent state changes. We formalize three realistic interruption types, including addition, revision, and retraction, and introduce InterruptBench, a benchmark derived from WebArena-Lite that synthesizes high-quality interruption scenarios under strict semantic constraints. Using a unified interruption simulation framework, we evaluate six strong LLM backbones across single- and multi-turn interruption settings, analyzing both their effectiveness in adapting to updated intents and their efficiency in recovering from mid-task changes. Our results show that handling user interruptions effectively and efficiently during long-horizon agentic tasks remains challenging for powerful large-scale LLMs. Code and dataset are available at https://github.com/HenryPengZou/InterruptBench.
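To make the three interruption types concrete, here is a minimal sketch of how an interruption event might be modeled in an agent loop. All names and the schema are hypothetical illustrations, not the benchmark's actual implementation, which synthesizes interruptions under semantic constraints rather than by simple intent concatenation.

```python
from dataclasses import dataclass
from enum import Enum


class InterruptionType(Enum):
    """The three interruption types formalized in the paper."""
    ADDITION = "addition"      # user adds a new requirement mid-task
    REVISION = "revision"      # user revises part of the original goal
    RETRACTION = "retraction"  # user withdraws an earlier instruction


@dataclass
class Interruption:
    """A single simulated user interruption (hypothetical schema)."""
    type: InterruptionType
    step: int      # agent step at which the interruption is injected
    message: str   # natural-language user utterance


def apply_interruptions(task_goal: str,
                        interruptions: list[Interruption]) -> list[str]:
    """Return the sequence of effective user intents after each interruption.

    Illustrative only: each interruption is appended to the current intent
    in step order, so the agent can be evaluated against the latest entry.
    """
    intents = [task_goal]
    for itr in sorted(interruptions, key=lambda i: i.step):
        intents.append(
            f"{intents[-1]} [{itr.type.value} @step {itr.step}: {itr.message}]"
        )
    return intents
```

In a multi-turn setting, several such events would be injected at different steps of one episode, and the agent is judged on whether it adapts to the final intent and how many extra steps recovery costs.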