🤖 AI Summary
This paper investigates how agents subjectively rank events by how strongly an observed clue corroborates them. Addressing clue-driven nonmonotonic inference, it studies rankings that satisfy Villegas's (1964) axioms for qualitative probability with the monotonicity axiom dropped, and formally characterizes the structural relationship among clues, inferences, and credibility assessments. Methodologically, the work integrates qualitative probability theory, conditional modeling, and signed measure analysis. Its two principal contributions are: (i) a proof that every such inference admits a unique normalized signed measure representation; and (ii) a Bayesian representation result: if the inference additionally ranks the largest event as equivalent to the smallest, it can be written as a posterior minus a prior, where the posterior is the prior conditioned on an assessed event interpreted as a guess of the clue. Across such representations the posterior is unique, all guesses are equivalent in a suitable sense, and the prior is determined by the weight it assigns to each possible guess; yet an observed prior and posterior compatible with the inference may reveal that all of these guesses are wrong. The framework provides a novel foundation for subjective inference under uncertainty, balancing logical rigor with statistical interpretability.
📝 Abstract
An agent observes a clue, and an analyst observes an inference: a ranking of events on the basis of how corroborated they are by the clue. We prove that if the inference satisfies the axioms of Villegas (1964) except for the classic qualitative probability axiom of monotonicity, then it has a unique normalized signed measure representation (Theorem 1). Moreover, if the inference also declares the largest event equivalent to the smallest event, then it can be represented as a difference between a posterior and a prior such that the former is the conditional probability of the latter with respect to an assessed event that is interpreted as a clue guess. Across these Bayesian representations, the posterior is unique, all guesses are in a suitable sense equivalent, and the prior is determined by the weight it assigns to each possible guess (Theorem 2). However, observation of a prior and posterior compatible with the inference could reveal that all of these guesses are wrong.
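The Bayesian representation in the abstract can be made concrete with a toy computation. The sketch below (the state space, prior weights, and guess event are illustrative choices, not from the paper) builds the signed set function φ(A) = P(A | G) − P(A) for a finite prior P and a guess event G, and checks the properties the abstract describes: φ is finitely additive, assigns the same value (zero) to the largest and smallest events, and is not monotone, since enlarging an event can strictly lower its rank.

```python
# Hypothetical finite setting: four states with a prior P and a "clue guess" event G.
omega = {"a", "b", "c", "d"}
prior = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
G = {"a", "b"}  # assumed guess event; must have positive prior probability

def P(A):
    """Prior probability of event A (a subset of omega)."""
    return sum(prior[s] for s in A)

def phi(A):
    """Signed measure representing the inference: posterior given G minus prior."""
    return P(A & G) / P(G) - P(A)

# Finitely additive on disjoint events:
assert abs(phi({"a"} | {"c"}) - (phi({"a"}) + phi({"c"}))) < 1e-12

# The largest event is equivalent to the smallest (both get value 0):
assert phi(set()) == 0.0 and abs(phi(omega)) < 1e-12

# Non-monotone: the superset {"a", "c"} ranks strictly below {"a"},
# because the clue guess G corroborates "a" but speaks against "c".
print(phi({"a"}), phi({"a", "c"}))
```

Here φ({"a"}) ≈ 0.171 while φ({"a", "c"}) ≈ −0.029, so adding a state the guess disfavors pushes the event's credibility below its subset's, which is exactly why the classic monotonicity axiom must be dropped.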