🤖 AI Summary
Large language models (LLMs) exhibit logical inconsistency in preference judgments, undermining their reliability and trustworthiness in decision-making. To address this, we formally define and quantify three axiomatic properties of logical preference consistency (transitivity, commutativity, and negation invariance) and construct a benchmark framework for evaluating them. We propose REPAIR, a data optimization method that enhances logical stability through preference distillation and counterfactual augmentation while preserving alignment with human preferences. Extensive evaluation across multiple LLMs demonstrates that REPAIR improves logical consistency by 27.3% on average, significantly boosting model stability and accuracy in reasoning tasks. Our work provides both theoretical foundations and practical methods for developing trustworthy AI decision systems grounded in logically coherent preference modeling.
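The three properties lend themselves to simple counting metrics over a model's pairwise verdicts. The sketch below is a minimal illustration of how such checks could be implemented; it is not the paper's released benchmark code, and the `items`, `judge`, and `negated_judge` callables (a preference oracle and its negated-question counterpart) are hypothetical stand-ins for an LLM judge. The reading of negation invariance as "negating the comparison question flips the verdict" is one plausible interpretation.

```python
from itertools import combinations, permutations

def transitivity_rate(items, judge):
    """Fraction of ordered triples where judge(a,b) and judge(b,c) also yield judge(a,c)."""
    applicable, satisfied = 0, 0
    for a, b, c in permutations(items, 3):
        if judge(a, b) and judge(b, c):
            applicable += 1
            satisfied += judge(a, c)  # bool counts as 0/1
    return satisfied / applicable if applicable else 1.0

def commutativity_rate(items, judge):
    """Fraction of pairs whose verdict is unchanged when presentation order is swapped."""
    pairs = list(combinations(items, 2))
    stable = sum(judge(a, b) == (not judge(b, a)) for a, b in pairs)
    return stable / len(pairs) if pairs else 1.0

def negation_invariance_rate(items, judge, negated_judge):
    """Fraction of pairs where asking the negated question flips the verdict."""
    pairs = list(combinations(items, 2))
    flipped = sum(judge(a, b) != negated_judge(a, b) for a, b in pairs)
    return flipped / len(pairs) if pairs else 1.0

# Toy usage with a deterministic numeric judge ("which is larger?" vs. "which is smaller?"):
# transitivity_rate([3, 1, 2], lambda a, b: a > b)                       -> 1.0
# negation_invariance_rate([3, 1, 2], lambda a, b: a > b, lambda a, b: a < b) -> 1.0
```

With an actual LLM judge, `judge(a, b)` would wrap a prompted comparison call, and each rate would be averaged over sampled item sets; a perfectly consistent judge scores 1.0 on all three metrics.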
📝 Abstract
Large Language Models (LLMs) are expected to be predictable and trustworthy to support reliable decision-making systems. Yet current LLMs often show inconsistencies in their judgments. In this work, we examine logical preference consistency as a foundational requirement for building more dependable LLM systems, ensuring stable and coherent decision-making while minimizing erratic or contradictory outputs. To quantify logical preference consistency, we propose a universal evaluation framework based on three fundamental properties: transitivity, commutativity, and negation invariance. Through extensive experimentation across diverse LLMs, we demonstrate that these properties serve as strong indicators of judgment robustness. Furthermore, we introduce a data refinement and augmentation technique, REPAIR, that enhances logical consistency while maintaining alignment with human preferences. Finally, we show that improving consistency leads to better performance in LLM-driven logic-based algorithms, reinforcing stability and coherence in decision-making systems.
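To make the counterfactual-augmentation idea concrete, the sketch below shows one plausible form it could take for pairwise preference data: each record is expanded with an order-swapped copy and a negated-question copy whose gold labels are flipped accordingly. The record schema and the `negate_question` helper are illustrative assumptions for this sketch, not REPAIR's actual pipeline.

```python
def augment_preference_record(record, negate_question):
    """Emit logically consistent counterfactual variants of one pairwise preference record.

    `record` is assumed to look like:
    {"question": "Which response is better?", "option_a": ..., "option_b": ..., "label": "A"}
    """
    flipped = "B" if record["label"] == "A" else "A"
    variants = [record]
    # Commutativity counterfactual: swap the two options and flip the gold label.
    variants.append({
        "question": record["question"],
        "option_a": record["option_b"],
        "option_b": record["option_a"],
        "label": flipped,
    })
    # Negation counterfactual: negate the comparison question and flip the gold label.
    variants.append({
        "question": negate_question(record["question"]),
        "option_a": record["option_a"],
        "option_b": record["option_b"],
        "label": flipped,
    })
    return variants
```

Training on such mutually consistent variants pushes the fine-tuned judge toward verdicts that respect commutativity and negation invariance by construction, rather than relying on the base model to generalize these symmetries on its own.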