A Framework for Bounding Deterministic Risk with PAC-Bayes: Applications to Majority Votes

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional PAC-Bayesian frameworks provide only expected risk guarantees for randomized hypotheses, limiting their direct applicability to deployment scenarios that require a single deterministic classifier. Method: We propose a general-purpose framework that systematically transforms randomized PAC-Bayesian bounds into provable risk upper bounds for a single deterministic hypothesis, combining PAC-Bayesian theory with numerical optimization to obtain computationally tractable deterministic generalization bounds. We establish a general oracle bound, derive a numerical bound from it, and specialize it to weighted majority voting. Contribution/Results: The analysis yields tighter, certifiable risk bounds for deterministic classifiers. Empirically, the method consistently outperforms popular baselines across multiple benchmarks, tightening deterministic risk bounds by up to a factor of two. This work bridges the gap between PAC-Bayesian theory and practical deterministic model deployment, providing both theoretical foundations and an effective computational tool.
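For orientation, here is a minimal LaTeX sketch of the standard objects involved (classical PAC-Bayes background, not the paper's new bounds). The notation is assumed, since the summary does not fix it: prior \pi, posterior \rho, Gibbs classifier G_\rho, \rho-weighted majority vote \mathrm{MV}_\rho, sample S of size n drawn from distribution D.

```latex
% Standard PAC-Bayes-kl (Seeger/Maurer) guarantee on the Gibbs risk:
% with probability at least 1 - \delta over the draw of S,
\mathrm{kl}\!\left(\widehat{R}_S(G_\rho) \,\middle\|\, R_D(G_\rho)\right)
  \le \frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\!\frac{2\sqrt{n}}{\delta}}{n}.
% Classical first-order route from this stochastic guarantee to a
% deterministic one for the majority vote (the baseline such work tightens):
R_D(\mathrm{MV}_\rho) \le 2\, R_D(G_\rho).
```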

📝 Abstract
PAC-Bayes is a popular and efficient framework for obtaining generalization guarantees in situations involving uncountable hypothesis spaces. Unfortunately, in its classical formulation, it only provides guarantees on the expected risk of a randomly sampled hypothesis. This requires stochastic predictions at test time, making PAC-Bayes unusable in many practical situations where a single deterministic hypothesis must be deployed. We propose a unified framework to extract guarantees holding for a single hypothesis from stochastic PAC-Bayesian guarantees. We present a general oracle bound and derive from it a numerical bound and a specialization to majority vote. We empirically show that our approach consistently outperforms popular baselines (by up to a factor of 2) when it comes to generalization bounds on deterministic classifiers.
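As an illustration of the baseline pipeline that such work improves on (a minimal sketch, not the paper's method), the Python below numerically inverts the classical PAC-Bayes-kl bound to certify the Gibbs risk, then applies the standard factor-of-two argument to obtain a deterministic majority-vote bound. The empirical risk, KL term, and sample size are hypothetical values chosen only for the example.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_inverse_upper(emp_risk, rhs, tol=1e-9):
    """Largest q >= emp_risk with kl(emp_risk || q) <= rhs, via binary search."""
    lo, hi = emp_risk, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bernoulli(emp_risk, mid) <= rhs:
            lo = mid
        else:
            hi = mid
    return lo

def pac_bayes_kl_bound(emp_gibbs_risk, kl_post_prior, n, delta=0.05):
    """Seeger/Maurer PAC-Bayes-kl upper bound on the true Gibbs risk."""
    rhs = (kl_post_prior + math.log(2 * math.sqrt(n) / delta)) / n
    return kl_inverse_upper(emp_gibbs_risk, rhs)

# Hypothetical numbers for illustration only.
gibbs_bound = pac_bayes_kl_bound(emp_gibbs_risk=0.08, kl_post_prior=5.0, n=10_000)
mv_bound = min(1.0, 2 * gibbs_bound)  # classical first-order majority-vote bound
print(f"Gibbs risk bound: {gibbs_bound:.4f}, majority-vote bound: {mv_bound:.4f}")
```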
Problem

Research questions and friction points this paper is trying to address.

Extracting deterministic risk bounds from stochastic PAC-Bayesian guarantees
Providing generalization guarantees for single deterministic hypotheses
Improving generalization bounds for majority vote classifiers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts deterministic guarantees from stochastic PAC-Bayesian bounds
Provides oracle bound specialized for majority vote classifiers
Empirically improves generalization bounds for deterministic classifiers