Semantic Consistency for Assuring Reliability of Large Language Models

📅 2023-08-17
🏛️ arXiv.org
📈 Citations: 13
Influential: 0
🤖 AI Summary
Large language models (LLMs) often produce inconsistent outputs under semantically equivalent prompts, undermining deployment reliability. To address this, the authors propose a general semantic consistency metric for open-ended text generation and introduce Ask-to-Choose (A2C), a novel prompting strategy that improves response stability. The metric is built on semantic similarity between generated sequences rather than lexical overlap, and is validated against human judgments of output consistency, with which it correlates substantially better than conventional token-level metrics (e.g., BLEU, ROUGE). In closed-book question answering over answer variants from the TruthfulQA benchmark, A2C improves accuracy for pretrained and finetuned LLMs by up to 47% and semantic consistency for instruction-tuned models by up to 7×. Together, the metric and the prompting strategy provide a practical methodology for evaluating and improving LLM reliability in real-world applications.
📝 Abstract
Large Language Models (LLMs) exhibit remarkable fluency and competence across various natural language tasks. However, recent research has highlighted their sensitivity to variations in input prompts. To deploy LLMs in a safe and reliable manner, it is crucial for their outputs to be consistent when prompted with expressions that carry the same meaning or intent. While some existing work has explored how state-of-the-art LLMs address this issue, their evaluations have been confined to assessing lexical equality of single- or multi-word answers, overlooking the consistency of generative text sequences. For a more comprehensive understanding of the consistency of LLMs in open-ended text generation scenarios, we introduce a general measure of semantic consistency, and formulate multiple versions of this metric to evaluate the performance of various LLMs. Our proposal demonstrates significantly higher consistency and stronger correlation with human evaluations of output consistency than traditional metrics based on lexical consistency. Finally, we propose a novel prompting strategy, called Ask-to-Choose (A2C), to enhance semantic consistency. When evaluated for closed-book question answering based on answer variations from the TruthfulQA benchmark, A2C increases accuracy metrics for pretrained and finetuned LLMs by up to 47%, and semantic consistency metrics for instruction-tuned models by up to 7-fold.
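The consistency measure described above scores how semantically close a model's outputs are when the same question is asked with paraphrased prompts. The sketch below illustrates the idea as mean pairwise similarity over sampled outputs; it uses a toy bag-of-words cosine similarity as a stand-in for a real sentence encoder, since the paper's specific similarity model is not named in this summary.

```python
from collections import Counter
from itertools import combinations
import math

def toy_embed(text):
    """Bag-of-words count vector; a stand-in for a real semantic encoder."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def semantic_consistency(outputs):
    """Mean pairwise similarity of outputs sampled under
    semantically equivalent prompts (1.0 = fully consistent)."""
    if len(outputs) < 2:
        return 1.0
    pairs = list(combinations(outputs, 2))
    return sum(cosine(toy_embed(a), toy_embed(b)) for a, b in pairs) / len(pairs)

# Outputs for three paraphrases of the same question
answers = ["the capital of france is paris",
           "paris is the capital of france",
           "berlin"]
score = semantic_consistency(answers)
```

Swapping `toy_embed`/`cosine` for an actual sentence-embedding model would recover the spirit of the paper's metric, which the abstract reports correlates with human consistency judgments far better than lexical metrics.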
Problem

Research questions and friction points this paper is trying to address.

Ensuring semantic consistency in LLM outputs for reliable deployment
Evaluating generative text sequence consistency beyond lexical equality
Improving LLM accuracy and consistency via novel prompting strategies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces general semantic consistency measure
Proposes Ask-to-Choose prompting strategy
Enhances LLM accuracy and consistency metrics
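The Ask-to-Choose (A2C) strategy, as described in the abstract, turns open-ended answering into a selection task over answer variants. A minimal sketch of that reframing, with a stub in place of a real LLM call and a hypothetical prompt wording (the paper's exact template is not given in this summary):

```python
def ask_to_choose(question, variants, generate):
    """A2C sketch: present answer variants as labeled options and
    ask the model to pick one, instead of generating freely."""
    options = "\n".join(f"({chr(65 + i)}) {v}" for i, v in enumerate(variants))
    prompt = (f"Question: {question}\n"
              f"Candidate answers:\n{options}\n"
              "Choose the best answer. Reply with its letter only.")
    letter = generate(prompt).strip().upper()[:1]
    idx = ord(letter) - ord("A")
    # Fall back to the first variant if the reply is not a valid option.
    return variants[idx] if 0 <= idx < len(variants) else variants[0]

# Stub "model" that always answers (B); a real system would query an LLM.
stub_model = lambda prompt: "B"
choice = ask_to_choose("What is the capital of France?",
                       ["Lyon", "Paris", "Marseille"], stub_model)
```

Constraining the model to a fixed option set is what lets A2C stabilize outputs across paraphrased prompts, which the abstract credits with up to 47% accuracy gains and up to 7-fold consistency gains on TruthfulQA answer variants.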