🤖 AI Summary
Large language models excel at isolated reasoning tasks, yet their reliability in task-oriented dialogue remains unclear. This work presents the first systematic evaluation of how multi-turn dialogue affects model reasoning, introducing BOULDER, a dynamic benchmark spanning eight travel-related task categories. Built around a dual-modality design (isolated vs. conversational), BOULDER integrates arithmetic, spatial, temporal, commonsense, and formal reasoning, and additionally imposes role assignments and tool-use constraints. Experiments across eight mainstream large language models reveal significant degradation of reasoning performance in dialogue settings, attributable primarily to the multi-turn interaction structure, with secondary contributions from role-specific instructions and tool-invocation requirements.
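The dual-modality design can be pictured as each dynamically generated problem carrying two renderings of the same underlying question. Below is a minimal Python sketch of such an item; the `BoulderItem` structure, field names, and dialogue template are illustrative assumptions, not the benchmark's actual schema.

```python
import random
from dataclasses import dataclass

@dataclass
class BoulderItem:
    """One dynamically generated problem with two presentation modes (hypothetical schema)."""
    category: str  # e.g. "arithmetic", "temporal", "spatial"
    question: str  # the underlying reasoning problem
    answer: str    # gold answer, identical for both modes

    def isolated_prompt(self) -> str:
        # Standalone, benchmark-style framing of the problem.
        return f"Solve the following problem.\n{self.question}"

    def dialogue_prompts(self) -> list[dict]:
        # The same problem embedded in a multi-turn travel dialogue,
        # with a role instruction and conversational context turns.
        return [
            {"role": "system", "content": "You are a helpful travel agent."},
            {"role": "user", "content": "Hi, I'm planning a trip next month."},
            {"role": "assistant", "content": "Happy to help! What do you need?"},
            {"role": "user", "content": self.question},
        ]

def make_arithmetic_item() -> BoulderItem:
    # Fresh parameters on every call; dynamic generation mitigates data contamination.
    price, nights = random.randint(80, 250), random.randint(2, 7)
    return BoulderItem(
        category="arithmetic",
        question=f"A hotel costs ${price} per night. What is the total for {nights} nights?",
        answer=str(price * nights),
    )
```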
📝 Abstract
Large Language Models (LLMs) achieve strong performance on many reasoning benchmarks, yet these evaluations typically focus on isolated tasks that differ from real-world usage in task-oriented dialogue (TOD). In that setting, LLMs must reason while simultaneously generating text and adhering to instructions on role, format, and style. This mismatch raises the question of whether benchmark performance accurately reflects models' reasoning robustness in TOD settings. We investigate how framing reasoning tasks within TOD affects LLM performance by introducing BOULDER, a new dynamic benchmark covering eight travel-related tasks that require arithmetic, spatial, and temporal reasoning with both commonsense and formal aspects. Each problem is presented in both an isolated and a dialogue-based variant, enabling controlled comparison while mitigating data contamination. Experiments on eight LLMs reveal a substantial and consistent performance gap between the isolated and dialogue settings. Through ablations and qualitative analysis, we show that this gap is largely driven by the multi-turn nature of dialogue, with additional effects from role conditioning and tool-use requirements. Our results highlight the need to evaluate LLM reasoning in realistic interactive scenarios.
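The headline comparison reduces to pairwise accuracy on the same items under both framings. A short sketch of that gap computation follows, assuming per-item correctness records; the function and field names are hypothetical, not the paper's evaluation code.

```python
def reasoning_gap(results: list[dict]) -> float:
    """Accuracy drop from isolated to dialogue framing, in percentage points.

    Each record is assumed to hold boolean correctness for the same
    underlying problem under both presentations, e.g.
    {"isolated_correct": True, "dialogue_correct": False}.
    """
    n = len(results)
    iso = sum(r["isolated_correct"] for r in results) / n
    dia = sum(r["dialogue_correct"] for r in results) / n
    return 100 * (iso - dia)

# Example: a model that solves 9/10 items in isolation but only 6/10 in dialogue.
records = [{"isolated_correct": i < 9, "dialogue_correct": i < 6} for i in range(10)]
print(f"gap: {reasoning_gap(records):.1f} points")  # gap: 30.0 points
```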