DiscussLLM: Teaching Large Language Models When to Speak

📅 2025-08-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current large language models (LLMs) engage only passively in multi-turn dialogues, lacking any awareness of when to intervene proactively, which limits their value as collaborative partners. Method: This paper proposes an LLM-based framework for modeling proactive speaking timing in dynamic dialogues. It introduces a large-scale annotated dialogue dataset covering five AI intervention categories, together with a "silence token" mechanism that explicitly represents the decision not to intervene. A scalable two-stage pipeline generates the training data, and two architectural baselines are explored: an integrated end-to-end model and a decoupled classifier-generator system optimized for low-latency inference. Contribution/Results: Experiments demonstrate that the method accurately identifies high-value intervention opportunities, improving dialogue quality and contextual awareness across diverse collaborative scenarios. To the authors' knowledge, this is the first systematic treatment of timely, effective proactive participation of LLMs in human dialogues.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in understanding and generating human-like text, yet they largely operate as reactive agents, responding only when directly prompted. This passivity creates an "awareness gap," limiting their potential as truly collaborative partners in dynamic human discussions. We introduce *DiscussLLM*, a framework designed to bridge this gap by training models to proactively decide not just *what* to say, but critically, *when* to speak. Our primary contribution is a scalable two-stage data generation pipeline that synthesizes a large-scale dataset of realistic multi-turn human discussions. Each discussion is annotated with one of five intervention types (e.g., Factual Correction, Concept Definition) and contains an explicit conversational trigger where an AI intervention adds value. By training models to predict a special silent token when no intervention is needed, they learn to remain quiet until a helpful contribution can be made. We explore two architectural baselines: an integrated end-to-end model and a decoupled classifier-generator system optimized for low-latency inference. We evaluate these models on their ability to accurately time interventions and generate helpful responses, paving the way for more situationally aware and proactive conversational AI.
Problem

Research questions and friction points this paper is trying to address.

Teaching LLMs to decide when to speak proactively
Bridging the awareness gap in reactive AI dialogue systems
Enabling AI interventions at valuable conversational triggers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage data generation pipeline
Predict silent token when not needed
Integrated end-to-end and decoupled architectures
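The decoupled classifier-generator design above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the names `should_intervene`, `generate_reply`, `respond`, and the `SILENT` token string are all hypothetical, and the classifier here is a toy keyword heuristic standing in for a trained model.

```python
# Sketch of a decoupled classifier-generator loop for deciding *when* to speak.
# All names are illustrative placeholders; the classifier is a toy heuristic.

SILENT = "<silent>"  # special token standing in for "do not intervene"

def should_intervene(dialogue_history):
    """Toy stand-in for a lightweight intervention classifier:
    fire only when the last turn looks like a factual claim."""
    last_turn = dialogue_history[-1].lower()
    return "actually" in last_turn or "is defined as" in last_turn

def generate_reply(dialogue_history):
    """Placeholder for the (expensive) generator model."""
    return f'[AI] Let me add context to: "{dialogue_history[-1]}"'

def respond(dialogue_history):
    """Run the cheap classifier on every turn; invoke the generator
    only when an intervention is predicted, so silent turns stay fast."""
    if not should_intervene(dialogue_history):
        return SILENT
    return generate_reply(dialogue_history)
```

The point of the split is latency: the classifier runs on every conversational turn, while the costly generator is called only on the small fraction of turns flagged as intervention-worthy.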
Deep Anil Patel
NEC Laboratories America
Iain Melvin
NEC Laboratories America
Christopher Malon
NEC Laboratories America
Martin Renqiang Min
Department Head of Machine Learning, NEC Laboratories America
Generative Models · Representation · Multimodal Reasoning · Generative Biomedicine · AI4Health