Who is Afraid of Minimal Revision?

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the limited learnability of minimal revision, a core operation in belief revision, by investigating the conditions under which it remains effectively learnable. Within a formal belief revision framework integrated with computational learning theory, the authors analyze the behavior of minimal revision, conditioning, and lexicographic upgrade over hypothesis spaces with multiple possibilities. They establish that minimal revision learns successfully on finitely identifiable problem classes, with both positive and negative data, provided a suitably constrained prior plausibility ordering exists. Several of these positive results break down, however, when the incoming information may be erroneous. Crucially, this work gives the first systematic characterization of the learnability boundaries of minimal revision, furnishing concrete, operationally verifiable conditions on priors. These findings bridge the gap between minimal revision and stronger learning mechanisms, advancing the theoretical foundations of rational belief change under uncertainty.

📝 Abstract
The principle of minimal change in belief revision theory requires that, when accepting new information, one keeps one's belief state as close to the initial belief state as possible. This is precisely what the method known as minimal revision does. However, unlike less conservative belief revision methods, minimal revision falls short in learning power: It cannot learn everything that can be learned by other learning methods. We begin by showing that, despite this limitation, minimal revision is still a successful learning method in a wide range of situations. Firstly, it can learn any problem that is finitely identifiable. Secondly, it can learn with positive and negative data, as long as one considers finitely many possibilities. We then characterize the prior plausibility assignments (over finitely many possibilities) that enable one to learn via minimal revision, and do the same for conditioning and lexicographic upgrade. Finally, we show that not all of our results still hold when learning from possibly erroneous information.
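The three belief revision policies compared in the abstract can be illustrated over a finite set of possibilities. Below is a minimal sketch, not from the paper itself: worlds are strings, a belief state is a plausibility ranking (a dict mapping each world to a rank, lower = more plausible), and new information is a set of worlds. Conditioning eliminates incompatible worlds; lexicographic upgrade promotes all compatible worlds above the rest; minimal revision conservatively promotes only the most plausible compatible worlds, leaving everything else in place. Function names and the rank encoding are illustrative assumptions.

```python
def conditioning(rank, info):
    """Delete all worlds incompatible with the new information."""
    return {w: r for w, r in rank.items() if w in info}

def lexicographic_upgrade(rank, info):
    """Make every info-world more plausible than every non-info-world,
    preserving the old order within each group."""
    inside = sorted((w for w in rank if w in info), key=rank.get)
    outside = sorted((w for w in rank if w not in info), key=rank.get)
    return {w: i for i, w in enumerate(inside + outside)}

def minimal_revision(rank, info):
    """Promote only the most plausible info-worlds to the top;
    all other worlds keep their relative order (minimal change)."""
    best = min(rank[w] for w in info if w in rank)
    top = {w for w in info if rank.get(w) == best}
    order = sorted(rank, key=lambda w: (w not in top, rank[w]))
    return {w: i for i, w in enumerate(order)}
```

For example, with prior ranking `{'w1': 0, 'w2': 1, 'w3': 2}` and information `{'w2', 'w3'}`, minimal revision promotes only `w2` (the best info-world), yielding the order `w2 < w1 < w3`, whereas lexicographic upgrade yields `w2 < w3 < w1`. The agent's beliefs at any point are the minimal-rank worlds.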
Problem

Research questions and friction points this paper is trying to address.

Minimal revision's learning power compared to other methods.
Conditions enabling learning via minimal revision and related methods.
Impact of erroneous information on minimal revision's effectiveness.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Minimal revision learns finitely identifiable problems effectively.
Works with both positive and negative data constraints.
Characterizes plausibility assignments for minimal revision learning.