Firm or Fickle? Evaluating Large Language Models Consistency in Sequential Interactions

📅 2025-03-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses response inconsistency in large language models (LLMs) during multi-turn interactions in high-stakes domains. Methodologically, it (1) introduces Position-Weighted Consistency (PWC) scoring to quantify response stability across dialogue turns more precisely; (2) constructs the first cross-domain, multi-difficulty consistency benchmark dataset; and (3) proposes CARGβ€”a confidence-aware response generation framework integrating confidence calibration with controllable decoding. Empirical evaluation demonstrates that CARG improves consistency by +28.6% in PWC score while preserving original task accuracy, thereby enhancing reliability for deployment in critical applications such as healthcare and finance.
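The summary does not give the PWC formula, but the idea of weighting early-turn stability more heavily can be sketched. The following is a minimal illustrative implementation, not the paper's actual metric: it assumes responses are compared against the initial answer and that turn weights decay exponentially (the `decay` parameter and equality-based matching are illustrative choices).

```python
def pwc_score(responses, decay=0.5):
    """Hypothetical position-weighted consistency score.

    Weights earlier follow-up turns more heavily, so a model that
    flips its answer early is penalized more than one that flips late.
    The paper's exact weighting scheme is not reproduced here;
    exponential decay is an illustrative stand-in.
    """
    if len(responses) < 2:
        return 1.0
    initial = responses[0]
    # Weight for the t-th follow-up turn decays geometrically.
    weights = [decay ** t for t in range(len(responses) - 1)]
    # 1.0 if the follow-up answer matches the initial answer, else 0.0.
    matches = [float(r == initial) for r in responses[1:]]
    return sum(w * m for w, m in zip(weights, matches)) / sum(weights)
```

Under this sketch, `["A", "A", "B"]` (a late flip) scores higher than `["A", "B", "A"]` (an early flip followed by recovery), capturing the early-stability emphasis the summary describes.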

πŸ“ Abstract
Large Language Models (LLMs) have shown remarkable capabilities across various tasks, but their deployment in high-stakes domains requires consistent performance across multiple interaction rounds. This paper introduces a comprehensive framework for evaluating and improving LLM response consistency, making three key contributions. First, we propose a novel Position-Weighted Consistency (PWC) score that captures both the importance of early-stage stability and recovery patterns in multi-turn interactions. Second, we present a carefully curated benchmark dataset spanning diverse domains and difficulty levels, specifically designed to evaluate LLM consistency under various challenging follow-up scenarios. Third, we introduce Confidence-Aware Response Generation (CARG), a framework that improves response stability by incorporating model confidence signals into the generation process. Empirical results demonstrate that CARG significantly improves response stability without sacrificing accuracy, underscoring its potential for reliable LLM deployment in critical applications.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM consistency in multi-turn interactions
Developing metrics for stability and recovery patterns
Improving response stability without accuracy loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Position-Weighted Consistency score for multi-turn interactions
Benchmark dataset for diverse challenging follow-up scenarios
Confidence-Aware Response Generation framework improves stability
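The page describes CARG only at a high level (confidence calibration plus controllable decoding), so the sketch below is a hypothetical instance of the idea, not the paper's method: a single decision step where the model's confidence, derived here from mean token log-probability, gates whether a previously committed high-confidence answer is kept under a challenging follow-up. The `generate` callback, the `threshold` value, and the confidence mapping are all assumptions.

```python
import math

def carg_respond(generate, history, prev_answer=None, prev_conf=0.0,
                 threshold=0.7):
    """Hypothetical confidence-aware response step.

    `generate(history)` is assumed to return (answer, token_logprobs).
    Confidence is taken as the geometric-mean token probability.
    If an earlier answer was committed with confidence above `threshold`
    and the new answer is not more confident, keep the earlier answer,
    favoring cross-turn stability over revision.
    """
    answer, token_logprobs = generate(history)
    # Geometric mean of token probabilities, mapped into [0, 1].
    conf = math.exp(sum(token_logprobs) / max(len(token_logprobs), 1))
    if prev_answer is not None and prev_conf >= threshold and conf <= prev_conf:
        return prev_answer, prev_conf
    return answer, conf
```

A usage pattern would call this once per dialogue turn, threading the returned answer and confidence back in as `prev_answer` and `prev_conf`, so the model only changes its position when the new generation is genuinely more confident.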