🤖 AI Summary
Existing task-oriented dialogue (TOD) models are predominantly trained on manually curated, written-language datasets and thus struggle with real-world spoken dialogue challenges, including ASR errors, word-level disfluencies, cross-turn coreference, and implicit reasoning. To address this gap, we introduce SpokenWOZ, the first large-scale human-to-human speech-text TOD dataset: covering 8 domains, 203k turns, 5.7k spoken dialogues, and 249 hours of audio, it systematically models the characteristics of spoken interaction. We propose two new tasks, cross-turn slot detection and reasoning slot detection, and support evaluation in text-only, speech-text dual-modal, and LLM-based settings. Experiments show that the state-of-the-art dialogue state tracking (DST) model achieves only 25.65% joint goal accuracy, while the best end-to-end model completes the user request in merely 52.1% of dialogues, underscoring the substantial difficulty of spoken TOD modeling and establishing SpokenWOZ as a critical benchmark for future research.
📝 Abstract
Task-oriented dialogue (TOD) models have made significant progress in recent years. However, previous studies primarily focus on datasets written by annotators, resulting in a gap between academic research and real-world spoken conversation scenarios. While several small-scale spoken TOD datasets have been proposed to address robustness issues such as ASR errors, they ignore the unique challenges of spoken conversation. To tackle these limitations, we introduce SpokenWOZ, a large-scale speech-text dataset for spoken TOD, containing 8 domains, 203k turns, 5.7k dialogues, and 249 hours of audio from human-to-human spoken conversations. SpokenWOZ further incorporates common spoken characteristics such as word-by-word processing and reasoning in spoken language. Based on these characteristics, we present cross-turn slot and reasoning slot detection as new challenges. We conduct experiments on various baselines, including text-modal models, newly proposed dual-modal models, and LLMs such as ChatGPT. The results show that current models still have substantial room for improvement in spoken conversation: the most advanced dialogue state tracker achieves only 25.65% joint goal accuracy, and the SOTA end-to-end model correctly completes the user request in only 52.1% of dialogues. The dataset, code, and leaderboard are available at https://spokenwoz.github.io/.
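Joint goal accuracy (JGA), the DST metric cited above, counts a dialogue turn as correct only when the entire predicted dialogue state (every slot-value pair) exactly matches the gold state, which is why scores like 25.65% can coexist with much higher per-slot accuracy. A minimal sketch of the standard computation (the slot names below are illustrative, not taken from SpokenWOZ):

```python
def joint_goal_accuracy(predictions, references):
    """Fraction of turns whose predicted dialogue state
    (a dict of slot -> value) exactly equals the gold state."""
    assert len(predictions) == len(references)
    correct = sum(pred == gold for pred, gold in zip(predictions, references))
    return correct / len(predictions)

# Two turns: the first state matches exactly, the second has one wrong slot.
preds = [
    {"hotel-area": "north", "hotel-stars": "4"},
    {"hotel-area": "north", "hotel-stars": "4", "train-day": "monday"},
]
golds = [
    {"hotel-area": "north", "hotel-stars": "4"},
    {"hotel-area": "north", "hotel-stars": "5", "train-day": "monday"},
]
print(joint_goal_accuracy(preds, golds))  # 0.5
```

Note that a single misrecognized slot value (e.g. from an ASR error) zeroes out the whole turn, which is one reason spoken input depresses JGA so sharply.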