How Do People Revise Inconsistent Beliefs? Examining Belief Revision in Humans with User Studies

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how humans revise preexisting beliefs when confronted with contradictory new information, with the aim of improving the cognitive fidelity of AI models of human reasoning. Addressing the gap between traditional belief revision theories, which prioritize logically minimal change but lack empirical cognitive support, and actual human behavior, the authors conducted three structured user studies combining contextualized reasoning tasks, structured questionnaires, and mixed-method analysis. The results reveal a robust human preference for **explanation-driven revision**: individuals consistently retain beliefs that causally explain contradictory evidence, even at the cost of non-minimal logical change. This provides the first systematic empirical evidence for explanation as a central cognitive principle in belief revision, challenging the long-standing minimality assumption and motivating a cognitively grounded revision paradigm. The work delivers empirically validated design principles and new revision operators for explainable AI (XAI), shown to be robust across diverse inconsistency scenarios.

📝 Abstract
Understanding how humans revise their beliefs in light of new information is crucial for developing AI systems that can effectively model, and thus align with, human reasoning. While theoretical belief revision frameworks rely on a set of principles establishing how these operations are performed, empirical evidence from cognitive psychology suggests that people may follow different patterns when presented with conflicting information. In this paper, we present three comprehensive user studies showing that people consistently prefer explanation-based revisions, i.e., revisions guided by explanations, which result in changes to their belief systems that are not necessarily captured by classical belief change theory. Our experiments systematically investigate how people revise their beliefs given explanations for inconsistencies, whether those explanations are provided or self-generated, and demonstrate a robust preference for what may appear to be non-minimal revisions across different types of scenarios. These findings have implications for AI systems designed to model human reasoning or to interact with humans, suggesting that such systems should accommodate explanation-based, potentially non-minimal belief revision operators to better align with human cognitive processes.
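
To make the contrast concrete, here is a deliberately simplified Python sketch of the two revision styles over a toy propositional belief base. It is not from the paper: `Belief`, `revise_minimal`, `revise_explanatory`, and the explanation-first scoring rule are all illustrative assumptions, and the exhaustive search is only viable for tiny bases.

```python
from dataclasses import dataclass
from itertools import combinations

# A toy propositional belief: a literal such as "flies" or "-flies",
# plus the set of literals this belief would causally explain.
@dataclass(frozen=True)
class Belief:
    claim: str
    explains: frozenset = frozenset()

def consistent(claims: set) -> bool:
    # Inconsistent iff some literal p appears together with its negation -p.
    return not any(("-" + c) in claims for c in claims if not c.startswith("-"))

def revise_minimal(base: set, evidence: str) -> set:
    """Minimality-driven revision: drop as few beliefs as possible
    so the base becomes consistent with the new evidence."""
    for k in range(len(base) + 1):
        for dropped in combinations(base, k):
            kept = base - set(dropped)
            if consistent({b.claim for b in kept} | {evidence}):
                return kept | {Belief(evidence)}
    return {Belief(evidence)}

def revise_explanatory(base: set, evidence: str, candidates=frozenset()) -> set:
    """Explanation-driven revision: among consistent outcomes, prefer those
    in which some retained or newly adopted belief explains the evidence,
    even when that means a larger (non-minimal) overall change."""
    pool = list(base | set(candidates))
    best, best_score = None, (-1, float("-inf"))
    for size in range(len(pool), -1, -1):
        for kept_tuple in combinations(pool, size):
            kept = set(kept_tuple)
            if not consistent({b.claim for b in kept} | {evidence}):
                continue
            n_explainers = sum(evidence in b.explains for b in kept)
            change = len(base ^ kept)  # symmetric difference = total change
            score = (n_explainers, -change)  # explanations first, size second
            if score > best_score:
                best, best_score = kept, score
    return (best if best is not None else set()) | {Belief(evidence)}

if __name__ == "__main__":
    # "Tweety flies" and "Tweety is not injured"; new evidence: "-flies".
    base = {Belief("flies"), Belief("-injured")}
    # Available explanation: an injury would explain not flying.
    injury = Belief("injured", explains=frozenset({"-flies"}))

    # Minimal repair drops only "flies" (one change):
    print({b.claim for b in revise_minimal(base, "-flies")})
    # Explanatory repair also swaps "-injured" for "injured" (three changes),
    # because the injury explains the contradictory observation:
    print({b.claim for b in revise_explanatory(base, "-flies", {injury})})
```

The toy output mirrors the reported finding: the explanation-retaining repair touches more beliefs than the minimal one, yet it is the kind of revision participants consistently preferred.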
Problem

Research questions and friction points this paper is trying to address.

How humans revise inconsistent beliefs in light of new information
Examining explanation-based belief revision in humans
Implications for AI systems modeling human reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explanation-based belief revision in humans
User studies on non-minimal belief changes
AI systems aligning with human reasoning