General Loss Functions Lead to (Approximate) Interpolation in High Dimensions

📅 2023-03-13
🏛️ arXiv.org
📈 Citations: 5
✨ Influential: 0
🤖 AI Summary
This work investigates the implicit bias of gradient descent in binary and multiclass classification under high-dimensional overparameterization, for arbitrary convex loss functions. We develop a unified analytical framework that, for the first time, extends the asymptotic equivalence of implicit bias beyond exponentially-tailed losses to general convex losses, bypassing the intermediate support-vector-machine (SVM) representation and directly establishing asymptotic equivalence between gradient descent trajectories and the minimum-norm interpolating solution. Building on the primal-dual analysis of Ji & Telgarsky (2021), we integrate high-dimensional asymptotics with sensitivity theory from convex optimization to rigorously derive closed-form approximate solutions across diverse loss families. Our analysis recovers existing exact equivalence results and, crucially, quantifies the systematic deviation that out-of-distribution (OOD)-oriented losses induce in the interpolating solution.
๐Ÿ“ Abstract
We provide a unified framework, applicable to a general family of convex losses and across binary and multiclass settings in the overparameterized regime, to approximately characterize the implicit bias of gradient descent in closed form. Specifically, we show that the implicit bias is approximated by (but not exactly equal to) the minimum-norm interpolation in high dimensions, which arises from training on the squared loss. In contrast to prior work, which was tailored to exponentially-tailed losses and used the intermediate support-vector-machine formulation, our framework directly builds on the primal-dual analysis of Ji and Telgarsky (2021), allowing us to provide new approximate equivalences for general convex losses through a novel sensitivity analysis. Our framework also recovers existing exact equivalence results for exponentially-tailed losses across binary and multiclass settings. Finally, we provide evidence for the tightness of our techniques, which we use to demonstrate the effect of certain loss functions designed for out-of-distribution problems on the closed-form solution.
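For concreteness, the minimum-norm interpolating solution referenced in the abstract admits a standard closed form. The statement below is ours, written under the usual overparameterized assumption that n < d and XX⊤ is invertible; it is not necessarily the paper's notation:

```latex
\hat{w}_{\mathrm{MNI}}
  \;=\; \operatorname*{arg\,min}_{w \in \mathbb{R}^{d}} \lVert w \rVert_{2}
  \quad \text{subject to} \quad Xw = y
  \;=\; X^{\top}\!\left(XX^{\top}\right)^{-1} y
```

Here X ∈ ℝ^{n×d} stacks the n training inputs and y ∈ {−1, +1}^n the labels. This is also the solution reached by gradient descent on the squared loss from zero initialization, which is why the abstract describes it as arising from training on the squared loss.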
Problem

Research questions and friction points this paper is trying to address.

Characterize the implicit bias of gradient descent for general convex losses
Show that this bias approximates minimum-norm interpolation in high dimensions
Analyze the effect of particular loss functions on the closed-form solution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework for general convex losses in the overparameterized regime
Approximates the implicit bias in closed form via primal-dual and sensitivity analysis (see the sketch after this list)
Recovers exact equivalence results for exponentially-tailed losses in binary and multiclass settings
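As a rough numerical illustration of the approximate equivalence claimed above, the sketch below (our construction, not the paper's code; the data model, dimensions, step size, and iteration count are all illustrative assumptions) runs gradient descent on the logistic loss, an exponentially-tailed loss, in the regime d ≫ n and compares the resulting direction with the minimum-norm interpolator:

```python
# Illustrative sketch: in the overparameterized regime (d >> n), the direction
# found by gradient descent on a convex classification loss should be close to
# the minimum-l2-norm interpolator of the labels. All constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 2000                                   # n samples, d >> n features
X = rng.standard_normal((n, d)) / np.sqrt(d)      # isotropic Gaussian inputs
y = rng.choice([-1.0, 1.0], size=n)               # random binary labels

# Minimum-norm interpolator: argmin ||w||_2 s.t. Xw = y (closed form).
w_mni = X.T @ np.linalg.solve(X @ X.T, y)

# Plain gradient descent on the averaged logistic loss log(1 + exp(-y <w, x>)).
w = np.zeros(d)
lr = 1.0
for _ in range(20000):
    margins = y * (X @ w)
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / n
    w -= lr * grad

# The two directions should nearly coincide: cosine similarity close to 1.
cos = (w @ w_mni) / (np.linalg.norm(w) * np.linalg.norm(w_mni))
print(f"cosine similarity with min-norm interpolator: {cos:.4f}")
```

On a typical run the cosine similarity lands very close to 1, consistent with the exact equivalence the paper recovers for exponentially-tailed losses; for other convex losses its results predict closeness rather than exact equality.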
K. Lai
School of Electrical & Computer Engineering, Georgia Institute of Technology
Vidya Muthukumar
Georgia Institute of Technology
machine learning theory · online decision-making · game theory