Scalable First-order Method for Certifying Optimal k-Sparse GLMs

๐Ÿ“… 2025-02-13
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the computational challenge of certifying global optimality for large-scale $k$-sparse generalized linear models (GLMs) under $\ell_0$-cardinality constraints. Existing dual bounding methods suffer from either excessive computational overhead or slow convergence. To overcome this, we propose a novel dual bounding framework that integrates perspective relaxation with a first-order proximal gradient method. Crucially, we design a log-linear-time algorithm that exactly computes the proximal operator for the resulting non-smooth regularizerโ€”a key theoretical and practical advance. We further introduce a low-overhead restart strategy to accelerate convergence. Embedded within a branch-and-bound solver, our method significantly improves dual bound computation efficiency on both synthetic and real-world datasets. The approach enables scalable, high-precision optimality certification for large-scale $k$-sparse GLMs, outperforming state-of-the-art baselines in speed and bound tightness.

๐Ÿ“ Abstract
This paper investigates the problem of certifying optimality for sparse generalized linear models (GLMs), where sparsity is enforced through an $\ell_0$ cardinality constraint. While branch-and-bound (BnB) frameworks can certify optimality by pruning nodes using dual bounds, existing methods for computing these bounds are either computationally intensive or exhibit slow convergence, limiting their scalability to large-scale problems. To address this challenge, we propose a first-order proximal gradient algorithm designed to solve the perspective relaxation of the problem within a BnB framework. Specifically, we formulate the relaxed problem as a composite optimization problem and demonstrate that the proximal operator of the non-smooth component can be computed exactly in log-linear time complexity, eliminating the need to solve a computationally expensive second-order cone program. Furthermore, we introduce a simple restart strategy that enhances convergence speed while maintaining low per-iteration complexity. Extensive experiments on synthetic and real-world datasets show that our approach significantly accelerates dual bound computations and is highly effective in providing optimality certificates for large-scale problems.
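The abstract describes a proximal gradient method with a low-overhead restart strategy. The paper's exact algorithm is not reproduced here; as a minimal sketch under stated assumptions, the loop below is a generic FISTA-style accelerated proximal gradient with a function-value adaptive restart (in the style of O'Donoghue and Candès), taking an arbitrary prox oracle. The names `fista_restart`, `grad_f`, `f`, and `prox` are illustrative, not from the paper.

```python
import numpy as np

def fista_restart(grad_f, f, prox, x0, step, iters=500):
    """Accelerated proximal gradient with function-value restart.

    Whenever the objective increases, the momentum is reset (t = 1),
    which is the low-overhead restart idea: one extra function
    evaluation per iteration, no change to per-iteration complexity.
    """
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    fx = f(x)
    for _ in range(iters):
        x_new = prox(y - step * grad_f(y))  # forward-backward step
        f_new = f(x_new)
        if f_new > fx:                      # objective went up: restart momentum
            y = x.copy()
            t = 1.0
            continue
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t, fx = x_new, t_new, f_new
    return x

# Toy usage: min 0.5*||Ax - b||^2 subject to x >= 0
A = np.diag([2.0, 1.0])
b = np.array([2.0, -1.0])
x = fista_restart(
    grad_f=lambda z: A.T @ (A @ z - b),
    f=lambda z: 0.5 * np.sum((A @ z - b) ** 2),
    prox=lambda v: np.maximum(v, 0.0),      # projection onto the nonnegative orthant
    x0=np.zeros(2),
    step=0.25,                              # 1 / L with L = ||A^T A||_2 = 4
)
```
In the paper's setting the prox oracle would be the exact log-linear-time operator for the perspective-relaxation regularizer rather than this simple projection.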
Problem

Research questions and friction points this paper is trying to address.

Certifying global optimality for $\ell_0$-constrained sparse GLMs is computationally hard.
Existing dual bounding methods do not scale to large problem instances.
Dual bounds must converge quickly while keeping per-iteration cost low.
Innovation

Methods, ideas, or system contributions that make the work stand out.

First-order proximal gradient algorithm
Log-linear time proximal operator
Simple restart strategy enhancement
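The paper's regularizer and its exact prox are not given in this summary, so as an illustration of how sorting yields a log-linear-time operator, here is the classic sort-based Euclidean projection onto the scaled simplex (Duchi et al. style), a common building block in relaxations of cardinality constraints. The function name `project_simplex` and the choice of set are assumptions, not the paper's operator.

```python
import numpy as np

def project_simplex(v, s=1.0):
    """Euclidean projection of v onto {z : z >= 0, sum(z) = s} in O(n log n).

    One sort dominates the cost (the log-linear step); everything
    after it is a linear scan, so no cone-program solve is needed.
    """
    u = np.sort(v)[::-1]                      # sort in decreasing order
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    cond = u - (css - s) / idx > 0            # breakpoint condition
    rho = idx[cond][-1]                       # largest index still active
    theta = (css[rho - 1] - s) / rho          # optimal shift
    return np.maximum(v - theta, 0.0)

z = project_simplex(np.array([0.5, 0.2, 0.9]), s=1.0)
```
The closed-form threshold `theta` is what makes an exact, solver-free prox possible; the paper's log-linear operator for the perspective relaxation follows the same sort-then-threshold pattern.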
๐Ÿ”Ž Similar Papers
No similar papers found.