Faster Algorithms for Structured Linear and Kernel Support Vector Machines

📅 2023-07-15
📈 Citations: 6
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the inefficiency of quadratic programming solvers for SVMs, both linear and Gaussian-kernel variants. A general quadratic program has input size Θ(n²), and under the Strong Exponential Time Hypothesis (SETH) no O(n^{2-o(1)})-time algorithm exists when the objective matrix is positive semidefinite. The authors bypass this conditional lower bound by exploiting structure: the SVM objective matrix has rank at most d, and the program has only O(1) equality linear constraints. Their method combines low-rank matrix factorization, fast matrix multiplication, structured convex optimization, and fine-grained complexity analysis. Contributions include: (1) an Õ(nd^{(ω+1)/2} log(1/ε))-time solver for linear SVMs; (2) an O(n^{1+o(1)} log(1/ε))-time solver for Gaussian-kernel SVMs when d = O(log n) and the squared dataset radius is sub-logarithmic in n; and (3) a matching Ω(n^{2-o(1)}) conditional lower bound when the squared dataset radius is Ω(log² n), improving the prior best lower bound in both the dimension d and the radius.
📝 Abstract
Quadratic programming is a ubiquitous prototype in convex programming. Many machine learning problems can be formulated as quadratic programming, including the famous Support Vector Machines (SVMs). Linear and kernel SVMs have been among the most popular models in machine learning over the past three decades, prior to the deep learning era. Generally, a quadratic program has an input size of $\Theta(n^2)$, where $n$ is the number of variables. Assuming the Strong Exponential Time Hypothesis ($\textsf{SETH}$), it is known that no $O(n^{2-o(1)})$-time algorithm exists when the quadratic objective matrix is positive semidefinite (Backurs, Indyk, and Schmidt, NeurIPS'17). However, problems such as SVMs usually admit much smaller input sizes: one is given $n$ data points, each of dimension $d$, and $d$ is oftentimes much smaller than $n$. Furthermore, the SVM program has only $O(1)$ equality linear constraints. This suggests that faster algorithms are feasible, provided the program exhibits certain structures. In this work, we design the first nearly-linear time algorithm for solving quadratic programs whenever the quadratic objective admits a low-rank factorization and the number of linear constraints is small. Consequently, we obtain results for SVMs:

* For linear SVM, when the input data is $d$-dimensional, our algorithm runs in time $\widetilde{O}(nd^{(\omega+1)/2}\log(1/\epsilon))$, where $\omega \approx 2.37$ is the fast matrix multiplication exponent.
* For Gaussian kernel SVM, when the data dimension $d = O(\log n)$ and the squared dataset radius is sub-logarithmic in $n$, our algorithm runs in time $O(n^{1+o(1)}\log(1/\epsilon))$.

We also prove that when the squared dataset radius is at least $\Omega(\log^2 n)$, then $\Omega(n^{2-o(1)})$ time is required. This improves upon the prior best lower bound in both the dimension $d$ and the squared dataset radius.
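The low-rank structure the abstract refers to can be seen directly in the dual linear-SVM objective: its $n \times n$ quadratic matrix factors through the $n \times d$ data matrix, so its rank is at most $d$. A minimal NumPy sketch with hypothetical toy data (not the paper's solver, just the structural fact it exploits):

```python
import numpy as np

# Hypothetical toy data: n points in d dimensions, with n >> d.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)

# Dual linear-SVM objective matrix: Q[i, j] = y_i * y_j * <x_i, x_j>.
# It factors as Q = B @ B.T with B = diag(y) @ X, so rank(Q) <= d
# even though Q itself is n x n.
B = y[:, None] * X   # n x d factor
Q = B @ B.T          # n x n Gram matrix of rank at most d

print(np.linalg.matrix_rank(Q))  # at most d = 5
```

Storing and working with the factor `B` instead of `Q` is what makes sub-quadratic input size, and hence sub-quadratic runtime, possible at all.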
Problem

Research questions and friction points this paper is trying to address.

- Develops faster algorithms for structured SVMs
- Focuses on exploiting a low-rank factorization of the quadratic objective
- Improves time complexity for both linear and Gaussian-kernel SVMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Nearly-linear time solver for structured quadratic programs
- Low-rank factorization of the quadratic objective matrix
- Support for programs with only a small number of linear constraints
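The bounded-radius condition for the Gaussian-kernel result can be illustrated with the polynomial method: when the squared radius is small, a low-degree truncation of the exponential already approximates the kernel matrix well (and each polynomial term has small rank when $d$ is small). A hedged NumPy sketch under assumed toy parameters, not the paper's construction:

```python
import math

import numpy as np

# Hypothetical toy data on a radius-R sphere (bounded-radius assumption).
rng = np.random.default_rng(1)
n, d, R = 100, 4, 1.0
X = rng.standard_normal((n, d))
X = R * X / np.linalg.norm(X, axis=1, keepdims=True)

# Exact Gaussian kernel matrix: K[i, j] = exp(-||x_i - x_j||^2).
sq = np.sum(X**2, axis=1)
G = X @ X.T
K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * G))

# Polynomial-method sketch: exp(-||x-y||^2) = e^{-||x||^2} e^{-||y||^2} e^{2<x,y>},
# and e^{2<x,y>} is truncated to degree t. Each degree-k term is a Gram matrix
# of monomial features, so its rank is at most C(d + k - 1, k).
t = 10
scale = np.exp(-sq[:, None]) * np.exp(-sq[None, :])
K_approx = scale * sum((2.0 * G) ** k / math.factorial(k) for k in range(t + 1))

err = np.abs(K - K_approx).max()
print(err)  # small, since the squared radius is bounded
```

When the squared radius grows like $\log^2 n$, the degree needed for a good approximation blows up, which is consistent with the paper's $\Omega(n^{2-o(1)})$ lower bound in that regime.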
Yuzhou Gu (New York University)
Zhao Song (Adobe Research)
Licheng Zhang (Massachusetts Institute of Technology)