Learning to Interrupt in Language-based Multi-agent Communication

📅 2026-04-07
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the problem of redundant communication in multi-agent language systems, which often leads to context overload and high computational cost while failing to adapt dynamically to listeners' needs. The study introduces, for the first time, a listener-driven interruptible communication framework inspired by human conversational interruption. In this framework, a listener can interrupt the speaker at an opportune moment based on predictions of future task reward and communication cost, enabling dynamic communication compression. By combining large language model prompting with reinforcement learning, the proposed method reduces communication cost by 32.2% on average across diverse collaborative tasks while maintaining or even improving task performance. It also demonstrates strong generalization across different agents and task settings.
📝 Abstract
Multi-agent systems using large language models (LLMs) have demonstrated impressive capabilities across various domains. However, current agent communication suffers from verbose outputs that overload the context and increase computational costs. Although existing approaches focus on compressing messages on the speaker side, they struggle to adapt to different listeners and to identify relevant information. An effective strategy in human communication is to let the listener interrupt in order to express an opinion or ask for clarification. Motivated by this, we propose an interruptible communication framework that allows the listening agent to interrupt the current speaker. Through prompting experiments, we find that current LLMs are often overconfident and interrupt before receiving enough information. We therefore propose a learning method that predicts appropriate interruption points based on the estimated future reward and cost. We evaluate our framework across various multi-agent scenarios, including 2-agent text pictionary games, 3-agent meeting scheduling, and 3-agent debate. Experimental results show that our HANDRAISER reduces communication cost by 32.2% compared to the baseline, with comparable or superior task performance. The learned interruption behavior also generalizes to different agents and tasks.
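The abstract describes the core decision: the listener interrupts once the estimated future reward of continuing to listen no longer justifies the added communication cost. A minimal sketch of such a reward-vs-cost rule is given below; the function name, the linear per-token cost model, and all parameter values are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a listener-side interruption rule: interrupt when
# the predicted marginal reward of continuing to listen is outweighed by
# the predicted marginal communication cost. The linear cost model and all
# names/values here are assumptions for illustration only.

def should_interrupt(estimated_future_reward: float,
                     expected_remaining_tokens: int,
                     cost_per_token: float = 0.01) -> bool:
    """Return True when listening further is predicted to cost more
    than the task reward it would add."""
    marginal_cost = cost_per_token * expected_remaining_tokens
    return estimated_future_reward < marginal_cost

# Early in a message, much useful information is still expected -> keep listening.
print(should_interrupt(estimated_future_reward=2.0,
                       expected_remaining_tokens=50))   # False
# Late in a verbose message, little reward remains -> interrupt.
print(should_interrupt(estimated_future_reward=0.2,
                       expected_remaining_tokens=50))   # True
```

In the paper's setting, the reward estimate would come from a learned predictor rather than being supplied directly; the sketch only shows the shape of the thresholding decision.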
Problem

Research questions and friction points this paper is trying to address.

multi-agent communication
language-based agents
communication efficiency
interruption mechanism
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

interruptible communication
multi-agent systems
large language models
communication efficiency
reinforcement learning