🤖 AI Summary
This paper studies quasi-self-concordant optimization problems with linear constraints $Nx=v$: $\min_{x:\,Nx=v}\sum_{i=1}^{n}f((Ax-b)_{i})$, where $f$ is quasi-self-concordant, $A\in\mathbb{R}^{n\times d}$, $N\in\mathbb{R}^{m\times d}$, and $n\ge d$. We propose a novel trust-region framework that introduces $\ell_{\infty}$ Lewis weights to construct approximate Newton steps, reducing the number of linear system solves per iteration from $\tilde{O}(n^{1/3})$ to $\tilde{O}(d^{1/3})$. Each trust-region subproblem is solved via a lightweight IRLS method for overdetermined $\ell_{\infty}$ regression. The algorithm returns a $(1+\varepsilon)$-approximate solution with theoretical complexity substantially improving upon prior approaches. Empirical evaluation demonstrates speedups of several orders of magnitude over CVX on large-scale regression tasks, confirming both high efficiency and strong scalability.
📝 Abstract
In this paper, we study the problem $\min_{x\in\mathbb{R}^{d},\,Nx=v}\sum_{i=1}^{n}f((Ax-b)_{i})$ for a quasi-self-concordant function $f:\mathbb{R}\to\mathbb{R}$, where $A,N$ are $n\times d$ and $m\times d$ matrices, and $b,v$ are vectors of length $n$ and $m$, with $n\ge d$. We show an algorithm based on a trust-region method with an oracle that can be implemented using $\widetilde{O}(d^{1/3})$ linear system solves, improving the $\widetilde{O}(n^{1/3})$ oracle by [Adil-Bullins-Sachdeva, NeurIPS 2021]. Our implementation of the oracle relies on solving the overdetermined $\ell_{\infty}$-regression problem $\min_{x\in\mathbb{R}^{d},\,Nx=v}\|Ax-b\|_{\infty}$. We provide an algorithm that finds a $(1+\varepsilon)$-approximate solution to this problem using $O((d^{1/3}/\varepsilon+1/\varepsilon^{2})\log(n/\varepsilon))$ linear system solves. This algorithm leverages $\ell_{\infty}$ Lewis weight overestimates and achieves this iteration complexity via a simple lightweight IRLS approach, inspired by the work of [Ene-Vladu, ICML 2019]. Experimentally, we demonstrate that our algorithm significantly improves the runtime of the standard CVX solver.
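To make the IRLS idea concrete, here is a minimal, unconstrained sketch of iteratively reweighted least squares for $\min_x\|Ax-b\|_{\infty}$: each step solves a weighted least-squares problem whose weights concentrate on the coordinates with the largest residuals, so the weighted $\ell_2$ objective tracks the $\ell_\infty$ norm. The softmax reweighting, the temperature `t`, and the damping factor are illustrative choices of ours; this is not the paper's Lewis-weight-based oracle, which additionally handles the constraint $Nx=v$ and carries the stated iteration guarantees.

```python
import numpy as np

def irls_linf(A, b, iters=100, t=1.0, damping=0.5):
    """Heuristic IRLS sketch for min_x ||Ax - b||_inf (no Nx = v constraint).

    Smaller t hugs the max residual more tightly but makes the weight
    updates more aggressive, which is why a damped update is used.
    """
    n, d = A.shape
    w = np.full(n, 1.0 / n)  # uniform initial weights
    x = np.zeros(d)
    for _ in range(iters):
        # Weighted least squares: min_x sum_i w_i * (Ax - b)_i^2
        sw = np.sqrt(w)
        x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
        r = np.abs(A @ x - b)
        rmax = max(r.max(), 1e-12)
        # Softmax reweighting: mass shifts to the largest residuals
        s = np.exp((r - rmax) / (t * rmax))
        s /= s.sum()
        w = damping * w + (1.0 - damping) * s  # damped weight update
    return x
```

On a toy overdetermined instance, the returned point has a smaller $\ell_\infty$ residual than the plain least-squares solution, since ordinary least squares targets $\ell_2$ rather than the maximum residual.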