Efficient Global Optimization of Two-layer ReLU Networks: Quadratic-time Algorithms and Adversarial Training

📅 2022-01-06
🏛️ SIAM Journal on Mathematics of Data Science
📈 Citations: 17
Influential: 1
🤖 AI Summary
Training two-layer ReLU networks faces several challenges: the non-convex landscape is prone to spurious local minima, training is sensitive to initialization and hyperparameters, and existing convexification methods incur prohibitive computational cost — exponential in the exact formulation and cubic even for an approximation heuristic. Method: The paper develops two efficient algorithms that train these networks with global convergence guarantees. The first applies the Alternating Direction Method of Multipliers (ADMM) to both the exact convex formulation and an approximate counterpart, achieving quadratic per-iteration complexity in the approximate case; the second uses sampled convex programs to solve simpler unconstrained convex formulations with approximate global optimality. The paper further applies robust convex optimization theory to derive convex formulations for adversarial training. Results: Experiments demonstrate linear global convergence, with the initial iterations often already yielding high prediction accuracy, and show improved robustness under adversarial training while closely approximating the global optimum.
📝 Abstract
The non-convexity of the artificial neural network (ANN) training landscape brings inherent optimization difficulties. While the traditional back-propagation stochastic gradient descent (SGD) algorithm and its variants are effective in certain cases, they can become stuck at spurious local minima and are sensitive to initializations and hyperparameters. Recent work has shown that the training of an ANN with ReLU activations can be reformulated as a convex program, bringing hope to globally optimizing interpretable ANNs. However, naively solving the convex training formulation has an exponential complexity, and even an approximation heuristic requires cubic time. In this work, we characterize the quality of this approximation and develop two efficient algorithms that train ANNs with global convergence guarantees. The first algorithm is based on the alternating direction method of multipliers (ADMM). It solves both the exact convex formulation and the approximate counterpart. Linear global convergence is achieved, and the initial several iterations often yield a solution with high prediction accuracy. When solving the approximate formulation, the per-iteration time complexity is quadratic. The second algorithm, based on the "sampled convex programs" theory, is simpler to implement. It solves unconstrained convex formulations and converges to an approximately globally optimal classifier. The non-convexity of the ANN training landscape is exacerbated when adversarial training is considered. We apply robust convex optimization theory to convex training and develop convex formulations that train ANNs robust to adversarial inputs. Our analysis explicitly focuses on one-hidden-layer fully connected ANNs, but can extend to more sophisticated architectures.
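The abstract's first algorithm is built on ADMM. The paper's solver targets the convex ReLU training formulation; as a hypothetical illustration of the generic ADMM iteration structure it relies on (a quadratic primal update, a proximal update, and a dual update), here is a minimal sketch on a toy lasso problem — not the paper's actual solver, and all names below are made up for the example:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=300):
    # Minimize 0.5*||A x - b||^2 + lam*||z||_1  subject to  x = z
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))  # x-update: quadratic subproblem
        z = soft_threshold(x + u, lam / rho)           # z-update: l1 proximal step
        u = u + x - z                                  # scaled dual (multiplier) update
    return z
```

The same split-then-alternate pattern is what makes each ADMM iteration cheap: every subproblem has a closed-form solution, and the multiplier update enforces consensus between the two copies of the variable.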
Problem

Research questions and friction points this paper is trying to address.

Overcoming non-convexity in ANN training for global optimization
Reducing exponential complexity in convex training formulations
Enhancing adversarial robustness via convex optimization in ANNs
Innovation

Methods, ideas, or system contributions that make the work stand out.

ADMM for quadratic-time convex training
Sampled convex programs for global convergence
Robust convex optimization for adversarial training
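The convex reformulation behind these contributions enumerates ReLU activation patterns, which is exponentially expensive in general; the sampling-based approach instead draws a limited number of random patterns and solves one unconstrained convex (least-squares-type) problem over them. The sketch below is an illustrative, hypothetical rendering of that idea (random "gate" vectors define activation masks; function names and parameters are invented for the example), not the paper's exact formulation:

```python
import numpy as np

def sampled_convex_relu_fit(X, y, P=20, seed=0):
    # Sample P random gate vectors; each induces a fixed ReLU activation pattern on X
    rng = np.random.default_rng(seed)
    n, d = X.shape
    G = rng.standard_normal((d, P))
    masks = (X @ G >= 0).astype(float)                    # n x P activation patterns
    # Stack masked copies of X into [D_1 X, ..., D_P X] and solve a single
    # unconstrained least-squares problem (a convex program) over all blocks
    F = np.hstack([masks[:, [i]] * X for i in range(P)])  # n x (P*d)
    v, *_ = np.linalg.lstsq(F, y, rcond=None)
    return G, v.reshape(P, d)

def sampled_convex_relu_predict(X, G, V):
    # Prediction: sum over patterns of the masked linear responses
    masks = (X @ G >= 0).astype(float)
    return sum(masks[:, i] * (X @ V[i]) for i in range(V.shape[0]))
```

With the activation patterns fixed by sampling, the objective is convex in the weights, so the fit is a global optimum of the sampled problem; the theory of sampled convex programs then bounds how far this lies from the optimum of the full (exponentially large) formulation.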