🤖 AI Summary
This paper addresses the problem of constructing universal confidence sets for arbitrary realizable likelihood families, with non-i.i.d. data and possible model misspecification. It proposes a unified framework based on sequential likelihood mixing, combining sequential analysis, Bayesian inference, and regret inequalities from online estimation to guarantee anytime-valid, rigorous coverage. The framework accommodates approximate inference methods (e.g., variational inference and sampling) without compromising the theoretical guarantees. For sequential linear regression and sparse estimation, the resulting confidence sequences are tighter than existing ones, come with significantly simplified proofs, and keep coverage uniformly controlled throughout the learning process. The core contribution is the joint treatment of model uncertainty and data dependence, striking a principled balance among universality, robustness, and computational feasibility.
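For orientation, here is a minimal sketch of the classical construction that sequential likelihood mixing builds on (Robbins-style mixtures combined with Ville's inequality). The notation below is ours, and the paper's general construction may differ in its details.

```latex
% Likelihood family {p_theta}, true parameter theta*, data x_1, x_2, ...
% (possibly dependent), and a mixing distribution q over Theta.
% The mixture likelihood ratio is
\[
  M_t \;=\; \frac{\int_\Theta \prod_{s=1}^{t} p_\theta(x_s \mid x_{1:s-1})\,\mathrm{d}q(\theta)}
                 {\prod_{s=1}^{t} p_{\theta^\star}(x_s \mid x_{1:s-1})}.
\]
% Under theta*, each factor has conditional expectation 1, so (M_t) is a
% nonnegative martingale with E[M_t] = 1. Ville's inequality then gives
%   P( exists t : M_t >= 1/alpha ) <= alpha,
% and therefore the sets
\[
  C_t \;=\; \Bigl\{\theta \in \Theta :\;
       \prod_{s=1}^{t} p_\theta(x_s \mid x_{1:s-1})
       \;>\; \alpha \int_\Theta \prod_{s=1}^{t} p_{\theta'}(x_s \mid x_{1:s-1})\,\mathrm{d}q(\theta')\Bigr\}
\]
% form a confidence sequence: P( theta* in C_t for all t ) >= 1 - alpha,
% i.e. anytime-valid coverage without i.i.d. assumptions.
```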
📝 Abstract
We present a universal framework for constructing confidence sets based on sequential likelihood mixing. Building upon classical results from sequential analysis, we provide a unifying perspective on several recent lines of work, and establish fundamental connections between sequential mixing, Bayesian inference and regret inequalities from online estimation. The framework applies to any realizable family of likelihood functions and allows for non-i.i.d. data and anytime validity. Moreover, the framework seamlessly integrates standard approximate inference techniques, such as variational inference and sampling-based methods, and extends to misspecified model classes, while preserving provable coverage guarantees. We illustrate the power of the framework by deriving tighter confidence sequences for classical settings, including sequential linear regression and sparse estimation, with simplified proofs.
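To make the recipe concrete, below is a minimal, self-contained sketch instantiating the mixture construction above for the textbook case of a Gaussian mean with known unit variance and a conjugate Gaussian mixing distribution. All names and constants (`theta_star`, `tau2`, the grid, the checkpoints) are illustrative choices of ours, not values or code from the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta_star, tau2, alpha = 0.7, 1.0, 0.05   # true mean, mixing variance, error level (illustrative)
x = rng.normal(theta_star, 1.0, size=200)  # unit-variance Gaussian observations

grid = np.linspace(-2.0, 3.0, 501)  # candidate parameters theta
loglik = np.zeros_like(grid)        # running sum_{s<=t} log p_theta(x_s)
log_mix = 0.0                       # running log of the mixture (marginal) likelihood
mu, var = 0.0, tau2                 # posterior mean/variance under the N(0, tau2) mixing prior

for t, xt in enumerate(x, start=1):
    # The mixture likelihood factorizes into one-step predictive densities.
    log_mix += norm.logpdf(xt, loc=mu, scale=np.sqrt(var + 1.0))
    # Conjugate Gaussian posterior update (known observation noise = 1).
    var_new = 1.0 / (1.0 / var + 1.0)
    mu = var_new * (mu / var + xt)
    var = var_new
    loglik += norm.logpdf(xt, loc=grid, scale=1.0)
    # Keep every theta whose likelihood exceeds alpha times the mixture likelihood;
    # with probability >= 1 - alpha, theta_star lies in C_t simultaneously for all t.
    in_set = loglik - log_mix > np.log(alpha)
    if t in (10, 50, 200):
        kept = grid[in_set]
        print(f"t={t:3d}  C_t ~ [{kept.min():+.3f}, {kept.max():+.3f}]")
```

Note that replacing the mixture likelihood by any lower bound (e.g., an evidence lower bound from variational inference) can only enlarge `C_t`, so coverage is preserved; this is one natural way approximate inference could enter, though we defer to the paper for its exact treatment.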