SpeakRL: Synergizing Reasoning, Speaking, and Acting in Language Models with Reinforcement Learning

📅 2025-12-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current human-AI collaboration systems predominantly feature passive agents that lack proactive intent clarification, ambiguity resolution, and joint decision-making capabilities. Method: This paper introduces the “Proactive Dialog Agent” paradigm, leveraging reinforcement learning (RL) to enable language models to dynamically initiate clarifying questions, calibrate user intent, and actively participate in collaborative decision-making during task execution. Contributions/Results: (1) The first RL framework explicitly optimizing for dialog proactivity; (2) An interpretable, multi-dimensional reward mechanism jointly optimizing question quality and task execution efficacy; (3) SpeakER—the first synthetic dataset designed for clarification-driven task solving. We employ PPO with multi-stage reward shaping (clarification necessity, information gain, action efficiency) and task-oriented dialog modeling. Experiments show a 20.14-percentage-point improvement in multi-turn task completion rate over baselines, reduced dialogue turns, and superior performance compared to larger closed-source models.

Technology Category

Application Category

📝 Abstract
Effective human-agent collaboration is increasingly prevalent in real-world applications. Current trends in such collaborations are predominantly unidirectional, with users providing instructions or posing questions to agents, where agents respond directly without seeking necessary clarifications or confirmations. However, the evolving capabilities of these agents require more proactive engagement, where agents should dynamically participate in conversations to clarify user intents, resolve ambiguities, and adapt to changing circumstances. Existing prior work under-utilize the conversational capabilities of language models (LMs), thereby optimizing agents as better followers rather than effective speakers. In this work, we introduce SpeakRL, a reinforcement learning (RL) method that enhances agents' conversational capabilities by rewarding proactive interactions with users, such as asking right clarification questions when necessary. To support this, we curate SpeakER, a synthetic dataset that includes diverse scenarios from task-oriented dialogues, where tasks are resolved through interactive clarification questions. We present a systematic analysis of reward design for conversational proactivity and propose a principled reward formulation for teaching agents to balance asking with acting. Empirical evaluations demonstrate that our approach achieves a 20.14% absolute improvement in task completion over base models without increasing conversation turns even surpassing even much larger proprietary models, demonstrating the promise of clarification-centric user-agent interactions.
Problem

Research questions and friction points this paper is trying to address.

Enhances agents' conversational capabilities through proactive interactions
Teaches agents to balance asking clarification questions with acting
Improves task completion in user-agent collaborations without extra turns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning rewards proactive conversational interactions
Synthetic dataset enables training for clarification question scenarios
Balanced reward design improves task completion without extra turns
🔎 Similar Papers