🤖 AI Summary
Current large language models (LLMs) participate in multi-turn dialogues only reactively; they lack awareness of when a proactive intervention would help, which limits their usefulness as collaborative partners. Method: The paper proposes an LLM-based framework for deciding when to speak proactively in dynamic dialogues. It contributes a large-scale annotated dialogue dataset spanning five AI intervention categories, together with a "silence token" mechanism that lets a model explicitly represent the decision not to intervene. The data come from a scalable two-stage generation pipeline, and two architectures are explored: an integrated end-to-end model and a decoupled classifier-generator system optimized for low-latency inference. Contribution/Results: Experiments show that the trained models accurately identify high-value intervention opportunities, improving dialogue quality and contextual awareness across diverse collaborative scenarios. To the authors' knowledge, this is the first systematic treatment of timely, helpful proactive participation by LLMs in human discussions.
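The "silence token" mechanism can be made concrete with a short sketch. The snippet below is a minimal illustration, not the paper's released code: the token string `<SILENCE>`, the `gpt2` placeholder base model, and the `maybe_intervene` helper are all assumptions made for exposition.

```python
# Minimal sketch of the silence-token mechanism, assuming a Hugging Face
# causal LM. The token string and model choice are illustrative, not the
# paper's actual artifacts.
from transformers import AutoModelForCausalLM, AutoTokenizer

SILENCE_TOKEN = "<SILENCE>"  # hypothetical special token meaning "do not intervene"

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder base model
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Register the silence token so a fine-tuned model can learn to emit it
# whenever staying quiet is the right decision.
tokenizer.add_special_tokens({"additional_special_tokens": [SILENCE_TOKEN]})
model.resize_token_embeddings(len(tokenizer))

def maybe_intervene(dialogue_history: str, max_new_tokens: int = 64) -> str | None:
    """Return an intervention, or None if the model chooses silence."""
    inputs = tokenizer(dialogue_history, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_ids = output_ids[0, inputs["input_ids"].shape[1]:]
    text = tokenizer.decode(new_ids, skip_special_tokens=False).strip()
    if text.startswith(SILENCE_TOKEN):
        return None  # model predicted the silence token: no intervention this turn
    return tokenizer.decode(new_ids, skip_special_tokens=True).strip()
```

Because non-intervention is an explicit, first-class prediction rather than an absence of output, the same decoding loop handles both speaking and staying quiet.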
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in understanding and generating human-like text, yet they largely operate as reactive agents, responding only when directly prompted. This passivity creates an "awareness gap," limiting their potential as truly collaborative partners in dynamic human discussions. We introduce $\textit{DiscussLLM}$, a framework designed to bridge this gap by training models to proactively decide not just $\textit{what}$ to say, but critically, $\textit{when}$ to speak. Our primary contribution is a scalable two-stage data generation pipeline that synthesizes a large-scale dataset of realistic multi-turn human discussions. Each discussion is annotated with one of five intervention types (e.g., Factual Correction, Concept Definition) and contains an explicit conversational trigger where an AI intervention adds value. By training models to predict a special silent token when no intervention is needed, they learn to remain quiet until a helpful contribution can be made. We explore two architectural baselines: an integrated end-to-end model and a decoupled classifier-generator system optimized for low-latency inference. We evaluate these models on their ability to accurately time interventions and generate helpful responses, paving the way for more situationally aware and proactive conversational AI.
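The latency advantage of the decoupled baseline comes from putting a cheap classifier in front of the expensive generator, so autoregressive decoding runs only on turns where an intervention is predicted. The sketch below assumes hypothetical model names and a hypothetical label scheme (the five intervention types plus "no_intervention"); the paper's actual components are not reproduced here.

```python
# Hedged sketch of a decoupled classifier-generator system. Model names and
# the label scheme are assumptions; a real system would use a classifier and
# a generator fine-tuned on the annotated dialogue dataset.
from transformers import pipeline

# Cheap gate: a sequence classifier that, hypothetically fine-tuned, outputs
# one of the five intervention types or "no_intervention" for each turn.
gate = pipeline("text-classification", model="distilbert-base-uncased")  # placeholder
generator = pipeline("text-generation", model="gpt2")  # placeholder

def on_new_turn(dialogue_history: str) -> str | None:
    """Classify the latest context; call the generator only when needed."""
    verdict = gate(dialogue_history[-2000:])[0]  # classify recent context only
    if verdict["label"] == "no_intervention":  # assumed label name
        return None  # stay silent; the costly generator is never invoked
    # Only now pay the cost of autoregressive generation.
    out = generator(dialogue_history, max_new_tokens=64, return_full_text=False)
    return out[0]["generated_text"].strip()
```

On most turns of a live discussion no intervention is warranted, so the per-turn cost is dominated by a single classifier forward pass rather than full generation.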