Robustly Learning Monotone Single-Index Models

📅 2025-08-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work studies robust learning of monotone single-index models (SIMs) under adversarial label noise with Gaussian input distributions, aiming to minimize the squared loss. Existing methods either handle only specific activation functions, e.g., the identity or logistic function, or fail to achieve a constant-factor approximation, limiting their applicability. To overcome this, the authors give the first computationally efficient robust algorithm that succeeds for *any* monotone activation function with a bounded (2+ζ)-th moment, a class that includes all monotone Lipschitz functions as well as discontinuous ones such as (possibly biased) halfspaces. The core innovation is an optimization framework that steps outside standard gradient methods: rather than following the gradient of a surrogate loss, it identifies a useful vector field to guide the updates, jointly exploiting the geometric structure of Gaussian space and the regularity of monotone functions, thereby avoiding strong structural assumptions on the activation's analytic form. The algorithm achieves a constant-factor approximation to the optimal squared loss.

📝 Abstract
We consider the basic problem of learning Single-Index Models with respect to the square loss under the Gaussian distribution in the presence of adversarial label noise. Our main contribution is the first computationally efficient algorithm for this learning task, achieving a constant factor approximation, that succeeds for the class of *all* monotone activations with bounded moment of order $2 + \zeta$, for $\zeta > 0$. This class in particular includes all monotone Lipschitz functions and even discontinuous functions like (possibly biased) halfspaces. Prior work for the case of unknown activation either does not attain constant factor approximation or succeeds for a substantially smaller family of activations. The main conceptual novelty of our approach lies in developing an optimization framework that steps outside the boundaries of usual gradient methods and instead identifies a useful vector field to guide the algorithm updates by directly leveraging the problem structure, properties of Gaussian spaces, and regularity of monotone functions.
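To make the setting concrete, below is a minimal illustrative sketch of the learning problem the abstract describes: Gaussian inputs, a hidden direction, a monotone activation, and an adversarially corrupted fraction of labels. The estimator shown is a plain projected-gradient baseline on the squared loss, chosen only for illustration; it is *not* the paper's algorithm, which replaces the gradient with a carefully constructed vector field. All names (`w_star`, the `tanh` activation, the corruption rate `eta`) are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-index model: y = sigma(<w*, x>) with x ~ N(0, I_d).
d, n = 5, 20000
w_star = np.zeros(d)
w_star[0] = 1.0                                  # hidden unit-norm direction


def sigma(t):
    return np.tanh(t)                            # an illustrative monotone activation


X = rng.standard_normal((n, d))
y = sigma(X @ w_star)

# Adversarial label noise: an eta-fraction of labels is corrupted arbitrarily
# (here, by sign-flipping; a real adversary may do anything).
eta = 0.05
corrupt = rng.choice(n, size=int(eta * n), replace=False)
y[corrupt] = -y[corrupt]

# Illustrative baseline (NOT the paper's algorithm): projected gradient
# descent on the squared loss over the unit sphere.
w = rng.standard_normal(d)
w /= np.linalg.norm(w)
lr = 0.5
for _ in range(200):
    z = X @ w
    r = sigma(z) - y                             # residuals under current direction
    grad = (X.T @ (r * (1.0 - np.tanh(z) ** 2))) / n  # squared-loss gradient in w
    w -= lr * grad
    w /= np.linalg.norm(w)                       # project back to the unit sphere

alignment = abs(w @ w_star)                      # 1.0 means perfect direction recovery
print(round(alignment, 2))
```

Even this naive baseline recovers the direction reasonably well at small corruption rates; the paper's contribution is an update rule with provable constant-factor guarantees for the full class of monotone activations, where plain gradient methods can fail.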
Problem

Research questions and friction points this paper is trying to address.

Learning Single-Index Models with adversarial label noise
Efficient algorithm for monotone activations with bounded moments
Overcoming limitations of gradient methods via novel optimization framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient algorithm for adversarial label noise
Optimization framework beyond gradient methods
Leverages Gaussian space and monotonicity properties