Aligning with Logic: Measuring, Evaluating and Improving Logical Preference Consistency in Large Language Models

📅 2024-10-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit logical inconsistencies in their preference judgments, undermining their reliability and trustworthiness as decision-makers. To address this, we formally define and quantify three axiomatic properties of logical preference consistency (transitivity, commutativity, and negation invariance) and build an evaluation benchmark around them. We propose REPAIR, a data refinement and augmentation method that improves logical stability through preference distillation and counterfactual augmentation while preserving alignment with human preferences. Evaluation across multiple LLMs shows that REPAIR improves logical consistency by 27.3% on average, boosting model stability and accuracy on reasoning tasks. This work provides both theoretical foundations and practical methodology for building trustworthy AI decision systems grounded in logically coherent preference modeling.

📝 Abstract
Large Language Models (LLMs) are expected to be predictable and trustworthy to support reliable decision-making systems. Yet current LLMs often show inconsistencies in their judgments. In this work, we examine logical preference consistency as a foundational requirement for building more dependable LLM systems, ensuring stable and coherent decision-making while minimizing erratic or contradictory outputs. To quantify logical preference consistency, we propose a universal evaluation framework based on three fundamental properties: transitivity, commutativity, and negation invariance. Through extensive experimentation across diverse LLMs, we demonstrate that these properties serve as strong indicators of judgment robustness. Furthermore, we introduce a data refinement and augmentation technique, REPAIR, that enhances logical consistency while maintaining alignment with human preferences. Finally, we show that improving consistency leads to better performance in LLM-driven logic-based algorithms, reinforcing stability and coherence in decision-making systems.
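Two of the three properties can be sketched as simple checks over a judge's pairwise outputs. Below is a minimal, hypothetical illustration (not the paper's actual metric definitions): `prefers[(x, y)] == True` stands in for a judge answering "x is preferred over y", and the scores count how often triples respect transitivity and how often swapping the presentation order flips the verdict (commutativity).

```python
from itertools import permutations

def transitivity_score(prefers):
    """Fraction of ordered triples (a, b, c) with a>b and b>c
    for which the judge also says a>c. Assumes a complete
    pairwise judgment dict (illustrative, not the paper's metric)."""
    items = {x for pair in prefers for x in pair}
    total = violations = 0
    for a, b, c in permutations(items, 3):
        if prefers.get((a, b)) and prefers.get((b, c)):
            total += 1
            if not prefers.get((a, c)):
                violations += 1
    return 1.0 if total == 0 else 1 - violations / total

def commutativity_score(prefers):
    """Fraction of item pairs judged consistently in both
    presentation orders: a>b iff not b>a."""
    pairs = {tuple(sorted(p)) for p in prefers}
    consistent = sum(1 for a, b in pairs
                     if prefers.get((a, b)) != prefers.get((b, a)))
    return consistent / len(pairs)

# Toy judgments over three candidate answers:
judgments = {
    ("A", "B"): True, ("B", "A"): False,
    ("B", "C"): True, ("C", "B"): False,
    ("A", "C"): True, ("C", "A"): False,
}
```

On this toy set both scores are 1.0; flipping the (A, C) verdicts would break transitivity while leaving commutativity intact, which is why the framework evaluates the properties separately.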
Problem

Research questions and friction points this paper is trying to address.

Assessing logical consistency in Large Language Models.
Developing a framework to evaluate judgment robustness.
Enhancing decision-making stability in LLM systems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

A universal evaluation framework for measuring logical consistency.
The REPAIR technique for enhancing logical consistency.
Improved consistency boosts the performance of logic-based algorithms.