Efficient Optimization Algorithms for Linear Adversarial Training

📅 2024-10-16
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generic convex optimizers scale poorly and run inefficiently when adversarially training linear models on large-scale problems. Method: The paper formulates linear adversarial robust learning as a convex optimization problem and introduces dedicated solvers based on extended-variable reformulations: iterative ridge regression variants for regression tasks and projected gradient descent variants for classification tasks. Contribution/Results: The proposed solvers significantly improve convergence speed and scalability, enabling efficient training on datasets with up to one million samples under rigorous theoretical guarantees. Numerical experiments demonstrate that they substantially outperform general-purpose convex solvers, including CVX and SCS, in both accuracy and computational speed.

📝 Abstract
Adversarial training can be used to learn models that are robust against perturbations. For linear models, it can be formulated as a convex optimization problem. Compared to methods proposed in the context of deep learning, leveraging the optimization structure allows significantly faster convergence rates. Still, the use of generic convex solvers can be inefficient for large-scale problems. Here, we propose tailored optimization algorithms for the adversarial training of linear models, which render large-scale regression and classification problems more tractable. For regression problems, we propose a family of solvers based on iterative ridge regression and, for classification, a family of solvers based on projected gradient descent. The methods are based on extended variable reformulations of the original problem. We illustrate their efficiency in numerical examples.
Problem

Research questions and friction points this paper is trying to address.

Develop efficient algorithms for adversarial training of linear models.
Address inefficiency of generic convex solvers in large-scale problems.
Propose tailored solvers for regression and classification using optimization reformulations.
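To make the regression side concrete: for an ℓ2-bounded perturbation of each input, the worst-case squared loss of a linear model has the closed form (|y_i − xᵢᵀw| + ε‖w‖₂)², so adversarial training reduces to minimizing a sum of such terms. The sketch below is an illustration, not the paper's exact algorithm: it uses one plausible majorize-minimize scheme in which each step is a weighted ridge regression (the function names, the warm start, and the specific majorizer are my assumptions).

```python
import numpy as np

def adv_ridge_regression(X, y, eps, n_iter=50, floor=1e-10):
    """Adversarially robust linear regression via iterative ridge steps.

    Minimizes sum_i (|y_i - x_i.w| + eps*||w||_2)^2, the worst-case squared
    loss under l2-bounded input perturbations ||delta_i||_2 <= eps.
    Each iteration minimizes a quadratic majorizer of the objective, which
    is a weighted ridge regression with a closed-form solution.
    NOTE: an illustrative MM-style sketch, not the paper's solver.
    """
    n, p = X.shape
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least-squares warm start
    for _ in range(n_iter):
        r = np.maximum(np.abs(y - X @ w), floor)   # current absolute residuals
        wn = max(np.linalg.norm(w), floor)         # current weight norm
        # Majorize the cross term 2*|r_i|*||w|| by AM-GM around (r, wn):
        # per-sample weights and an effective ridge penalty.
        a = 1.0 + eps * wn / r
        lam = eps * eps * n + eps * r.sum() / wn
        A = X.T @ (a[:, None] * X) + lam * np.eye(p)
        w = np.linalg.solve(A, X.T @ (a * y))      # weighted ridge step
    return w

def adv_loss(X, y, w, eps):
    """Worst-case (adversarial) squared loss for l2-bounded perturbations."""
    return np.sum((np.abs(y - X @ w) + eps * np.linalg.norm(w)) ** 2)
```

Because each step minimizes a majorizer that touches the objective at the current iterate, the adversarial loss is non-increasing across iterations.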
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tailored algorithms for linear adversarial training
Iterative ridge regression for large-scale regression
Projected gradient descent for classification efficiency
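For classification, the key structural fact is that the inner maximization over ℓ∞-bounded perturbations has a closed form for linear models: the worst-case margin is y·wᵀx − ε‖w‖₁, so no inner attack loop is needed. The sketch below exploits this with plain (sub)gradient descent on the resulting convex loss; it is a minimal illustration under these assumptions, not the paper's projected-gradient-descent solver (the function names and step size are mine).

```python
import numpy as np

def adv_logistic_train(X, y, eps, lr=0.1, n_iter=500):
    """Adversarial training of a linear classifier, labels y in {-1, +1}.

    For l_inf-bounded perturbations ||delta||_inf <= eps, the worst-case
    logistic loss is log(1 + exp(-(y*w.x - eps*||w||_1))), so we can run
    (sub)gradient descent on it directly, with no inner attack loop.
    NOTE: an illustrative sketch, not the paper's PGD-based solver.
    """
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        m = y * (X @ w) - eps * np.sum(np.abs(w))  # worst-case margins
        s = 1.0 / (1.0 + np.exp(np.clip(m, -30.0, 30.0)))  # sigmoid(-m), clipped
        # Subgradient of the mean adversarial logistic loss.
        grad = -(X.T @ (s * y)) / n + eps * np.mean(s) * np.sign(w)
        w -= lr * grad
    return w

def adv_logistic_loss(X, y, w, eps):
    """Mean worst-case logistic loss under l_inf-bounded perturbations."""
    m = y * (X @ w) - eps * np.sum(np.abs(w))
    return np.mean(np.log1p(np.exp(-np.clip(m, -30.0, 30.0))))
```

The ℓ1 norm appears because it is the dual of ℓ∞; for ℓ2-bounded perturbations the same construction would use ‖w‖₂ instead.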