🤖 AI Summary
Lockean belief sets, defined via a confidence threshold, are typically not closed under classical logical consequence, which limits their applicability in belief change. This paper first characterizes the necessary and sufficient conditions for deductive closure of Lockean belief sets, providing two formal representations. Second, it proposes a belief update mechanism grounded in minimal probabilistic distance: upon receiving new information, beliefs are revised to preserve logical consistency while minimizing revision cost. The approach integrates probabilistic logic, AGM belief revision, and formal semantic analysis, and rigorously establishes that the mechanism balances update stability against minimal perturbation while maintaining deductive closure. The resulting framework advances probabilistic belief modeling by unifying logical coherence with dynamic adaptability.
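To make the threshold-based update concrete, here is a minimal sketch (not the paper's actual minimal-distance mechanism): it revises the distribution by Bayesian conditioning on new evidence, then recomputes the Lockean belief set. The atoms `A`, `B`, the distribution, and the threshold `0.75` are all illustrative assumptions.

```python
from itertools import product

# Four possible worlds assigning truth values to two atoms (A, B);
# the distribution below is chosen purely for illustration.
worlds = list(product([True, False], repeat=2))
prob = {(True, True): 0.2, (True, False): 0.2,
        (False, True): 0.6, (False, False): 0.0}

threshold = 0.75  # Lockean threshold: believe phi iff P(phi) >= threshold

A = lambda w: w[0]
B = lambda w: w[1]

def believed(dist, phi):
    """phi is in the Lockean belief set iff its probability meets the threshold."""
    return sum(p for w, p in dist.items() if phi(w)) >= threshold

def condition(dist, evidence):
    """Bayesian conditioning: renormalize mass onto the evidence worlds."""
    z = sum(p for w, p in dist.items() if evidence(w))
    return {w: (p / z if evidence(w) else 0.0) for w, p in dist.items()}

# Before the update, P(B) = 0.8, so B is believed.
print(believed(prob, B))
# Learning A shifts the distribution; P(B | A) = 0.5, so B drops out.
posterior = condition(prob, A)
print(believed(posterior, B))
```

Conditioning is itself a minimal change in several standard distance senses, which is why it serves here as a stand-in for the paper's minimal-revision construction.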
📝 Abstract
Within the formal setting of the Lockean thesis, an agent's belief set is defined in terms of degrees of confidence, which are described in probabilistic terms. This approach is of established interest, notwithstanding some limitations that make it troublesome to use in certain contexts, such as belief change theory. Specifically, Lockean belief sets are not generally closed under (classical) logical deduction. The aim of the present paper is twofold: on the one hand, we provide two characterizations of those belief sets that are closed under classical logical deduction; on the other, we propose an approach to probabilistic update that allows for a minimal revision of those beliefs, i.e., a revision obtained by making the fewest possible changes to the existing belief set while still accommodating the new information. In particular, we show how a belief set can be deductively closed via a minimal revision.
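As a quick illustration of the closure failure (not taken from the paper), a lottery-style example in Python shows how two formulas can each clear the Lockean threshold while their conjunction, a logical consequence of believing both, does not. The atoms, distribution, and threshold `0.75` are hypothetical.

```python
from itertools import product

# Possible worlds assign truth values to two atoms (A, B);
# the probability distribution is chosen for illustration only.
worlds = list(product([True, False], repeat=2))
prob = {(True, True): 0.6, (True, False): 0.2,
        (False, True): 0.2, (False, False): 0.0}

def p(phi):
    """Probability that a formula (a predicate on worlds) holds."""
    return sum(prob[w] for w in worlds if phi(w))

threshold = 0.75  # Lockean threshold: believe phi iff P(phi) >= threshold

A = lambda w: w[0]
B = lambda w: w[1]
A_and_B = lambda w: w[0] and w[1]

# P(A) = P(B) = 0.8 >= 0.75: both A and B are believed.
# P(A and B) = 0.6 < 0.75: their conjunction is not believed,
# so the belief set is not deductively closed.
print(p(A), p(B), p(A_and_B))
```

This is the phenomenon the paper's two characterizations rule out: closure holds only for belief sets whose underlying distribution avoids such threshold gaps.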