🤖 AI Summary
Current single-turn evaluation paradigms fail to reveal how the factuality of large language models (LLMs) degrades as dialogue length increases or prompting strategies vary in multi-turn interactions. This work addresses that gap by constructing simulated multi-turn dialogues from the BoolQ dataset, systematically controlling the number of turns and the prompting strategy to evaluate the factuality of three prominent LLMs. The study finds that all models exhibit significant length-dependent and prompting-sensitive declines in factuality, highlighting the limitations of static, single-turn assessments for real-world deployment. It further uncovers model-specific vulnerability patterns for the first time, offering a novel paradigm for evaluating the reliability of multi-turn dialogue systems.
📝 Abstract
Single-prompt evaluations dominate current LLM benchmarking, yet they fail to capture the conversational dynamics where real-world harm occurs. In this study, we examined whether conversation length affects response veracity by evaluating LLM performance on the BoolQ dataset under varying length and scaffolding conditions. Our results across three distinct LLMs revealed model-specific vulnerabilities that remain invisible under single-turn testing. The length-dependent and scaffold-specific effects we observed demonstrate a fundamental limitation of static evaluations: deployment-relevant vulnerabilities can only be detected in a multi-turn conversational setting.
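
The sketch below illustrates one way an evaluation like this could be wired up: BoolQ-style yes/no items are asked after a controlled number of filler turns under a chosen system-prompt scaffold, and accuracy is compared across dialogue lengths. It is a minimal, hypothetical sketch, not the authors' actual protocol; the `ask_model` callable, the `FILLER_TURNS` padding, and the sample items are all placeholder assumptions.

```python
# Hypothetical sketch: building padded multi-turn dialogues from BoolQ-style
# items and scoring yes/no factuality as the turn count grows.
# `ask_model` is a stand-in for whatever chat-completion API is actually used.

from typing import Callable

# Placeholder BoolQ-style items (passage, question, gold yes/no answer).
BOOLQ_ITEMS = [
    {"passage": "The Amazon is the largest river by discharge volume of water.",
     "question": "is the amazon the largest river by discharge", "answer": True},
    {"passage": "Mount Everest is Earth's highest mountain above sea level.",
     "question": "is mount everest the tallest mountain above sea level", "answer": True},
]

# Filler exchanges used only to lengthen the conversation before the target question.
FILLER_TURNS = [
    ("Tell me something interesting about geography.", "Sure! Here's a fact..."),
    ("Thanks. Can you keep answers brief from now on?", "Of course."),
]

def build_dialogue(item: dict, n_filler_turns: int, scaffold: str) -> list[dict]:
    """Assemble a chat history with n_filler_turns of padding before the BoolQ question."""
    messages = [{"role": "system", "content": scaffold}]
    for i in range(n_filler_turns):
        user, assistant = FILLER_TURNS[i % len(FILLER_TURNS)]
        messages.append({"role": "user", "content": user})
        messages.append({"role": "assistant", "content": assistant})
    messages.append({
        "role": "user",
        "content": f"{item['passage']}\n\nQuestion: {item['question']}? Answer yes or no.",
    })
    return messages

def evaluate(ask_model: Callable[[list[dict]], str],
             n_filler_turns: int, scaffold: str) -> float:
    """Return accuracy over the items at a given dialogue length and scaffold."""
    correct = 0
    for item in BOOLQ_ITEMS:
        reply = ask_model(build_dialogue(item, n_filler_turns, scaffold)).strip().lower()
        predicted = reply.startswith("yes")
        correct += int(predicted == item["answer"])
    return correct / len(BOOLQ_ITEMS)

if __name__ == "__main__":
    # Stub model so the sketch runs without an API key; always answers "yes".
    stub = lambda messages: "yes"
    for turns in (0, 4, 8):
        acc = evaluate(stub, turns, scaffold="You are a careful, factual assistant.")
        print(f"filler turns={turns}: accuracy={acc:.2f}")
```

In a real setup, `ask_model` would wrap the model under test, the items would be drawn from the full BoolQ split, and the scaffold string would be swapped out to compare prompting strategies at each dialogue length.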