Interactive Learning of Single-Index Models via Stochastic Gradient Descent

πŸ“… 2026-02-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work investigates efficient learning of single-index models from adaptively collected data, as arises in sequential interactive learning problems such as generalized linear or ridge bandits. We analyze the dynamics of stochastic gradient descent (SGD) and, for the first time in this setting, reveal two distinct phases: an initial "burn-in" stage followed by a "learning" stage. Leveraging this insight, we design an adaptive learning-rate scheduling strategy that applies to a broad class of link functions. Our approach achieves near-optimal sample complexity and cumulative regret bounds within a single SGD run, matching or closely approaching the best-known theoretical guarantees in the literature.

πŸ“ Abstract
Stochastic gradient descent (SGD) is a cornerstone algorithm for high-dimensional optimization, renowned for its empirical successes. Recent theoretical advances have provided a deep understanding of how SGD enables feature learning in high-dimensional nonlinear models, most notably the *single-index model* with i.i.d. data. In this work, we study the sequential learning problem for single-index models, also known as generalized linear bandits or ridge bandits, where SGD is a simple and natural solution, yet its learning dynamics remain largely unexplored. We show that, similar to the optimal interactive learner, SGD undergoes a distinct "burn-in" phase before entering the "learning" phase in this setting. Moreover, with an appropriately chosen learning rate schedule, a single SGD procedure simultaneously achieves near-optimal (or best-known) sample complexity and regret guarantees across both phases, for a broad class of link functions. Our results demonstrate that SGD remains highly competitive for learning single-index models under adaptive data.
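To make the two-phase picture concrete, here is a minimal illustrative sketch, not the paper's algorithm: SGD on a single-index model `y = f(<w*, x>) + noise` with a hypothetical two-phase step-size schedule (constant rate during a burn-in window, then `1/t` decay). The `tanh` link, the phase boundary `burn_in`, and the step sizes are all assumptions made for illustration; for simplicity the data here are drawn i.i.d. rather than collected adaptively as in the bandit setting.

```python
import numpy as np

rng = np.random.default_rng(0)

d, T, burn_in = 20, 5000, 1000  # dimension, SGD steps, hypothetical phase boundary
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)  # ground-truth index direction

def link(z):
    return np.tanh(z)  # illustrative monotone link, not from the paper

def link_grad(z):
    return 1.0 - np.tanh(z) ** 2

w = rng.standard_normal(d) / np.sqrt(d)  # random init: weak alignment with w_star

for t in range(T):
    x = rng.standard_normal(d)
    y = link(x @ w_star) + 0.1 * rng.standard_normal()  # noisy response
    z = x @ w
    grad = (link(z) - y) * link_grad(z) * x  # stochastic gradient of squared loss
    # Two-phase schedule: constant step size during burn-in, then 1/t decay
    eta = 0.2 if t < burn_in else 0.2 * burn_in / (t + 1)
    w -= eta * grad

alignment = abs(w @ w_star) / np.linalg.norm(w)
print(f"alignment with w_star: {alignment:.3f}")
```

With a random start, the estimate barely correlates with `w_star`; the alignment grows slowly at first and then rapidly once the learning phase begins, which is the qualitative behavior the schedule is designed to exploit.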
Problem

Research questions and friction points this paper is trying to address.

single-index models
stochastic gradient descent
interactive learning
adaptive data
generalized linear bandits
Innovation

Methods, ideas, or system contributions that make the work stand out.

stochastic gradient descent
single-index model
interactive learning
generalized linear bandits
adaptive data