🤖 AI Summary
Nonparametric inference for unlabeled histograms over large alphabets remains challenging, as classical methods rely on labeled frequency counts and fail under label ambiguity or absence.
Method: This paper introduces a novel framework modeling histogram multisets via mixture distributions, integrating nonparametric maximum likelihood estimation (NPMLE) into the unlabeled setting for the first time. It combines Poissonization-based modeling with localized plug-in estimation to handle symmetry and sparsity.
Contribution/Results: The method achieves minimax-optimal convergence rates and sample complexity for symmetric functionals—including entropy and support size—matching information-theoretic lower bounds in sparse regimes. Extensive experiments on synthetic data, real-world corpora, and large language model outputs demonstrate superior performance in capturing unseen domain elements and performing sparse histogram inference, consistently outperforming state-of-the-art baselines.
📝 Abstract
Statistical inference on histograms and frequency counts plays a central role in categorical data analysis. Moving beyond classical methods that directly analyze labeled frequencies, we introduce a framework that models the multiset of unlabeled histograms via a mixture distribution to better capture unseen domain elements in large-alphabet regime. We study the nonparametric maximum likelihood estimator (NPMLE) under this framework, and establish its optimal convergence rate under the Poisson setting. The NPMLE also immediately yields flexible and efficient plug-in estimators for functional estimation problems, where a localized variant further achieves the optimal sample complexity for a wide range of symmetric functionals. Extensive experiments on synthetic, real-world datasets, and large language models highlight the practical benefits of the proposed method.