Testing Noise Assumptions of Learning Algorithms

📅 2025-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the testability of noise assumptions (specifically Massart noise) in halfspace learning under Gaussian marginals: namely, whether a training set can be verified in polynomial time to satisfy a given noise model. We establish the first theoretical framework for *testable learning* under noise, integrating statistical hypothesis testing, semi-algebraic geometry, and convex optimization to design the first fully polynomial-time testable learning algorithm satisfying the "accept-implies-optimality-certificate" property. Key contributions are: (1) demonstrating that halfspace learning under Massart noise over Gaussian marginals is efficiently testable; (2) proving that testable learning requires super-polynomial time under random classification noise with flip probability $\eta = 1/2$, even though classical learning is trivial in that case, thereby exhibiting a fundamental separation from classical (non-testable) learning; and (3) characterizing the boundary between testable learning and classical noisy learning.
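The baseline in the separation result is worth making concrete: under random classification noise with flip probability exactly $1/2$, the observed labels carry no information about the target, so every hypothesis attains the optimal error of $1/2$ and classical learning is trivial. A minimal illustrative sketch of this (not from the paper; the function name and fixed target concept are hypothetical):

```python
import random

def rcn_labels(n, eta=0.5, seed=1):
    """Under random classification noise (RCN) with eta = 1/2, each label
    is flipped with probability 1/2, so the observed labels are uniform
    coin flips regardless of the underlying target concept."""
    rng = random.Random(seed)
    labels = []
    for _ in range(n):
        y = 1  # true label from some fixed concept; irrelevant at eta = 1/2
        if rng.random() < eta:
            y = -y
        labels.append(y)
    return labels

# Every hypothesis has expected error exactly 1/2, so matching the optimal
# error is trivial -- yet the paper shows that *testing* whether the data
# really came from this noise model requires super-polynomial time.
```

This is the sense in which testing the noise assumption can be strictly harder than learning under it.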

📝 Abstract
We pose a fundamental question in computational learning theory: can we efficiently test whether a training set satisfies the assumptions of a given noise model? This question has remained unaddressed despite decades of research on learning in the presence of noise. In this work, we show that this task is tractable and present the first efficient algorithm to test various noise assumptions on the training data. To model this question, we extend the recently proposed testable learning framework of Rubinfeld and Vasilyan (2023) and require a learner to run an associated test that satisfies the following two conditions: (1) whenever the test accepts, the learner outputs a classifier along with a certificate of optimality, and (2) the test must pass for any dataset drawn according to a specified modeling assumption on both the marginal distribution and the noise model. We then consider the problem of learning halfspaces over Gaussian marginals with Massart noise (where each label can be flipped with probability less than $1/2$ depending on the input features), and give a fully polynomial-time testable learning algorithm. We also show a separation between the classical setting of learning in the presence of structured noise and testable learning. In fact, for the simple case of random classification noise (where each label is flipped with fixed probability $\eta = 1/2$), we show that testable learning requires super-polynomial time while classical learning is trivial.
Problem

Research questions and friction points this paper is trying to address.

Computational Learning
Noisy Gaussian Data
Halfspace Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Noisy Data Testing
Halfspace Learning
Testable Learning Efficiency