The Principle of Uncertain Maximum Entropy

📅 2023-05-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Classical maximum entropy (MaxEnt) principles assume perfectly accurate input information, rendering them fragile under observational noise and uncertainty. This work introduces the *Uncertain Maximum Entropy Principle*, a framework that incorporates observation uncertainty directly into the MaxEnt formalism. It establishes a strictly convex generalized optimization model, eliminating ad hoc heuristic relaxations and thereby guaranteeing existence, uniqueness, and physical interpretability of solutions. Methodologically, the approach combines a convex approximation, an Expectation-Maximization (EM) algorithm, and probabilistic constraint optimization, supported by theoretical analysis. Experiments demonstrate that the proposed method significantly outperforms baseline approaches under noisy conditions, achieving both high estimation accuracy and strong robustness. By unifying uncertainty quantification with entropy-based inference, it provides an interpretable and verifiable paradigm for distribution estimation under imperfect information.
📝 Abstract
The principle of maximum entropy is a well-established technique for choosing a distribution that matches available information while minimizing bias. It finds broad use across scientific disciplines and in machine learning. However, the principle as classically defined is susceptible to noise and error in observations. This forces real-world practitioners to use relaxed versions of the principle in an ad hoc way, negatively impacting interpretation. To address this situation, we present a new principle we call uncertain maximum entropy that generalizes the classic principle and provides interpretable solutions irrespective of the observational methods in use. We introduce a convex approximation and an expectation-maximization based algorithm for finding solutions to our new principle. Finally, we contrast this new technique with two simpler, generally applicable solutions, and show both theoretically and experimentally that our technique provides superior accuracy.
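To make the classic principle concrete, here is a minimal sketch of maximum entropy estimation over a finite state space: maximize entropy subject to matching observed feature expectations, solved through the convex dual, which yields p(x) ∝ exp(λ·φ(x)). The feature matrix `phi` and target expectations are illustrative toy values, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def max_ent(phi, target):
    """MaxEnt over a finite state space.

    phi:    (n_states, n_features) feature matrix
    target: empirical feature expectations to match
    """
    def dual(lam):
        # Convex dual objective: log partition function minus lam . target
        logits = phi @ lam
        log_z = np.log(np.sum(np.exp(logits - logits.max()))) + logits.max()
        return log_z - lam @ target

    lam = minimize(dual, np.zeros(phi.shape[1]), method="BFGS").x
    logits = phi @ lam
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Toy example: states 0..3 with a single feature phi(x) = x.
phi = np.arange(4.0).reshape(-1, 1)
p = max_ent(phi, np.array([1.2]))  # choose p with E[x] = 1.2, maximum entropy
```

The dual is smooth and convex, so an off-the-shelf quasi-Newton solver recovers the unique distribution matching the constraints.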
Problem

Research questions and friction points this paper is trying to address.

Extends the maximum entropy principle to handle uncertain information inputs
Models information uncertainty using a memoryless communication channel framework
Provides entropy bounds and an algorithm for distribution estimation under observation errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Relaxes the error-free observation requirement via a communication-channel model
Provides an upper bound on the entropy of the unknown distribution
Develops a simple algorithm using a limited-sample approximation
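The EM-based approach outlined above can be sketched as follows: observations ω relate to hidden states x through a known noisy channel P(ω|x); each E-step forms the posterior over states under the current model, and each M-step moves the MaxEnt solution toward the resulting expected feature counts. The channel matrix, variable names, and the single-gradient-step M-step are illustrative simplifications, not the authors' exact formulation.

```python
import numpy as np

def uncertain_max_ent(phi, channel, obs_counts, iters=300, lr=0.5):
    """EM-style loop for MaxEnt with noisy observations.

    phi:        (n_states, n_features) feature matrix
    channel:    channel[o, x] = P(o | x), the observation model
    obs_counts: empirical counts of each observation o
    """
    n_states, n_feat = phi.shape
    lam = np.zeros(n_feat)
    obs_freq = obs_counts / obs_counts.sum()
    for _ in range(iters):
        logits = phi @ lam
        p = np.exp(logits - logits.max())
        p /= p.sum()                              # current model p(x)
        # E-step: posterior P(x | o) proportional to P(o | x) p(x),
        # averaged over the empirical observation frequencies.
        joint = channel * p                       # (n_obs, n_states)
        post = joint / joint.sum(axis=1, keepdims=True)
        target = obs_freq @ post @ phi            # expected feature counts
        # M-step: one gradient step on the MaxEnt dual toward the new target.
        lam -= lr * (p @ phi - target)
    logits = phi @ lam
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Toy example: 3 states, one feature phi(x) = x, noiseless (identity) channel.
phi = np.arange(3.0).reshape(-1, 1)
channel = np.eye(3)
p = uncertain_max_ent(phi, channel, np.array([1.0, 2.0, 1.0]))
```

With a noiseless channel the posterior collapses onto the observed states and the loop reduces to ordinary MaxEnt against the empirical feature expectations; a noisy channel instead blends the model's prior with each observation's likelihood.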
K. Bogert
Department of Computer Science, University of North Carolina Asheville, 1 University Heights, Asheville, NC 28801, USA
Matthew Kothe
Department of Computer Science, University of North Carolina Asheville, 1 University Heights, Asheville, NC 28801, USA