Reading Between the Lines: The One-Sided Conversation Problem

📅 2025-11-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses a practical constraint in privacy-sensitive domains such as telemedicine and call centers, where only unidirectional dialogue transcripts (i.e., one speaker's utterances) are available, formalizing it as the One-Sided Conversation (1SC) problem. It focuses on two core tasks: missing-turn reconstruction and one-sided dialogue summarization. Methodologically, it shows that access to one future turn and information about utterance length improves reconstruction fidelity; proposes placeholder prompting to mitigate LLM hallucination; and establishes that high-quality summaries can be generated *without explicit turn reconstruction*. The approach combines prompt engineering with model fine-tuning. Experiments on MultiWOZ, DailyDialog, and Candor show that large models achieve reasonable reconstructions via prompting alone, while smaller models require fine-tuning, and that summarization performs well even with no reconstruction step at all.
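The placeholder prompting idea can be illustrated with a minimal sketch: instead of asking a model to freely continue a one-sided transcript (which invites hallucinated extra turns), the prompt marks exactly one slot to fill and exposes the next recorded turn plus a length hint. The function name, `Partner`/`Agent` labels, and prompt wording below are illustrative assumptions, not the paper's actual prompts.

```python
def build_reconstruction_prompt(visible_turns, missing_index, length_hint="short"):
    """Build a placeholder-style prompt for reconstructing one missing turn
    in a one-sided transcript (illustrative sketch, not the paper's exact prompt).

    visible_turns: list of (speaker, text) pairs from the recorded side.
    missing_index: index of the recorded turn after which the partner's
    unrecorded turn occurred; it is marked with an explicit placeholder so
    the model fills exactly one slot.
    """
    lines = []
    for i, (speaker, text) in enumerate(visible_turns):
        lines.append(f"{speaker}: {text}")
        if i == missing_index:
            # Explicit placeholder: the model is asked to fill only this slot,
            # with the following recorded turn available as future context.
            lines.append("Partner: [MISSING TURN]")
    transcript = "\n".join(lines)
    return (
        "The following transcript contains only one speaker's turns.\n"
        f"Replace [MISSING TURN] with a single {length_hint} utterance that is "
        "consistent with the turn that follows it. Output only that utterance.\n\n"
        + transcript
    )
```

Constraining generation to one marked slot, with the subsequent turn visible, is one way to realize the paper's findings that future-turn context and length information help while placeholders curb hallucination.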

📝 Abstract
Conversational AI is constrained in many real-world settings where only one side of a dialogue can be recorded, such as telemedicine, call centers, and smart glasses. We formalize this as the one-sided conversation problem (1SC): inferring and learning from one side of a conversation. We study two tasks: (1) reconstructing the missing speaker's turns for real-time use cases, and (2) generating summaries from one-sided transcripts. Evaluating prompting and finetuned models on MultiWOZ, DailyDialog, and Candor with both human A/B testing and LLM-as-a-judge metrics, we find that access to one future turn and information about utterance length improves reconstruction, placeholder prompting helps to mitigate hallucination, and while large models generate promising reconstructions with prompting, smaller models require finetuning. Further, high-quality summaries can be generated without reconstructing missing turns. We present 1SC as a novel challenge and report promising results that mark a step toward privacy-aware conversational AI.
Problem

Research questions and friction points this paper is trying to address.

Infer missing dialogue turns from one-sided conversations
Generate summaries directly from partial conversation transcripts
Address privacy constraints in real-world conversational AI applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conditioning turn reconstruction on one future turn and utterance-length information
Using placeholder prompting to reduce hallucination in reconstructions
Generating summaries directly from partial transcripts without reconstruction
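The last innovation, summarizing without reconstruction, amounts to prompting on the raw one-sided transcript directly. A minimal sketch follows; the `Agent` label and prompt wording are illustrative assumptions rather than the paper's actual prompt.

```python
def build_onesided_summary_prompt(visible_turns):
    """Build a prompt that summarizes a one-sided transcript directly,
    without first reconstructing the partner's turns (illustrative sketch).

    visible_turns: list of utterance strings from the recorded speaker.
    """
    transcript = "\n".join(f"Agent: {turn}" for turn in visible_turns)
    return (
        "Below is one side of a two-party conversation; the other speaker's "
        "turns were never recorded. Summarize the conversation, inferring the "
        "gist of the unrecorded side only where the visible turns imply it.\n\n"
        + transcript
    )
```

Skipping reconstruction avoids compounding errors from hallucinated intermediate turns, consistent with the reported finding that high-quality summaries do not require the missing turns to be filled in first.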
Victoria Ebert — Paul G. Allen School of Computer Science & Engineering, University of Washington
Rishabh Singh — Paul G. Allen School of Computer Science & Engineering, University of Washington
Tuochao Chen — University of Washington (speech AI)
Noah A. Smith — University of Washington; Allen Institute for Artificial Intelligence (natural language processing, machine learning, computational social science, computer music)
Shyamnath Gollakota — Paul G. Allen School of Computer Science & Engineering, University of Washington; Hearvana AI