Is Length Really A Liability? An Evaluation of Multi-turn LLM Conversations using BoolQ

📅 2026-01-23
🤖 AI Summary
Current single-turn evaluation paradigms fail to uncover the degradation in factuality of large language models (LLMs) as dialogue length increases or prompting strategies vary in multi-turn interactions. This work addresses this gap by constructing simulated multi-turn dialogues based on the BoolQ dataset, systematically controlling the number of turns and prompting strategies to evaluate the factuality of three prominent LLMs. The study reveals that all models exhibit significant length-dependent and prompting-sensitive declines in factuality, highlighting the limitations of static, single-turn assessments in real-world deployment scenarios. Furthermore, it uncovers model-specific vulnerability patterns for the first time, offering a novel paradigm for evaluating the reliability of multi-turn dialogue systems.

📝 Abstract
Single-prompt evaluations dominate current LLM benchmarking, yet they fail to capture the conversational dynamics where real-world harm occurs. In this study, we examined whether conversation length affects response veracity by evaluating LLM performance on the BoolQ dataset under varying length and scaffolding conditions. Our results across three distinct LLMs revealed model-specific vulnerabilities that are invisible under single-turn testing. The length-dependent and scaffold-specific effects we observed demonstrate a fundamental limitation of static evaluations: deployment-relevant vulnerabilities could only be detected in a multi-turn conversational setting.
Problem

Research questions and friction points this paper is trying to address.

multi-turn conversation
LLM evaluation
response veracity
BoolQ
conversation length
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-turn evaluation
conversational dynamics
length-dependent vulnerability
LLM benchmarking
scaffolding effects
Karl Neergaard
Independent Researcher
Le Qiu
The Hong Kong Polytechnic University
Emmanuele Chersoni
The Hong Kong Polytechnic University
Computational Linguistics