Relax and penalize: a new bilevel approach to mixed-binary hyperparameter optimization

📅 2023-08-21
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Continuous relaxation and rounding strategies for mixed-binary hyperparameter optimization (MBHO) can produce inconsistent solutions because of the gap between the relaxed and the discrete optima. Method: the paper reformulates MBHO as an equivalent continuous bilevel problem by means of an appropriate penalty term and proposes an algorithmic framework that, under suitable assumptions, is guaranteed to return mixed-binary solutions. The generality of the method allows existing continuous bilevel solvers to be used safely within the framework. Results: on the estimation of the group-sparsity structure in regression problems and on data distillation, the approach is competitive with state-of-the-art methods based on relaxation and rounding.
📝 Abstract
In recent years, bilevel approaches have become very popular to efficiently estimate high-dimensional hyperparameters of machine learning models. However, to date, binary parameters are handled by continuous relaxation and rounding strategies, which could lead to inconsistent solutions. In this context, we tackle the challenging optimization of mixed-binary hyperparameters by resorting to an equivalent continuous bilevel reformulation based on an appropriate penalty term. We propose an algorithmic framework that, under suitable assumptions, is guaranteed to provide mixed-binary solutions. Moreover, the generality of the method allows one to safely use existing continuous bilevel solvers within the proposed framework. We evaluate the performance of our approach for two specific machine learning problems, i.e., the estimation of the group-sparsity structure in regression problems and the data distillation problem. The reported results show that our method is competitive with state-of-the-art approaches based on relaxation and rounding.
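In schematic form (the notation below is ours for illustration, not necessarily the paper's exact formulation), the penalty-based reformulation replaces the binary constraint with a relaxed box constraint plus a penalty that makes binary points optimal:

```latex
\min_{\lambda,\; w \in \{0,1\}^p} F(\lambda, w)
\quad\longrightarrow\quad
\min_{\lambda,\; w \in [0,1]^p} F(\lambda, w) \;+\; \rho \sum_{i=1}^{p} \min(w_i,\, 1 - w_i)
```

Here $F(\lambda, w)$ denotes the upper-level (validation) objective evaluated at the lower-level minimizer, and the key property is that, for sufficiently large $\rho$ and under suitable assumptions, the penalized continuous problem admits mixed-binary minimizers, so no rounding step is needed.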
Problem

Research questions and friction points this paper is trying to address.

Binary hyperparameters are currently handled by continuous relaxation followed by rounding, which can yield inconsistent solutions
The gap between the relaxed and the discrete solution undermines the reliability of bilevel hyperparameter estimation
Existing continuous bilevel solvers cannot directly handle mixed-binary hyperparameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equivalent continuous bilevel reformulation based on an appropriate penalty term
Algorithmic framework that, under suitable assumptions, is guaranteed to produce mixed-binary solutions
Compatibility with existing continuous bilevel solvers
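As a toy illustration of the penalty mechanism (a one-dimensional example of our own, not the paper's algorithm or experiments): relaxing a binary variable w ∈ {0, 1} to the interval [0, 1] and adding the penalty ρ·min(w, 1 − w) drives the continuous minimizer onto {0, 1} once ρ is large enough.

```python
# Toy example: minimize f(w) = (w - 0.7)**2 over the binary set {0, 1}.
# The plain relaxation over [0, 1] yields w = 0.7, which is not binary.
# Adding rho * min(w, 1 - w) pushes the minimizer to an exactly binary
# point for sufficiently large rho (here the objective and the rho values
# are illustrative choices, not taken from the paper).

def penalized(w, rho):
    return (w - 0.7) ** 2 + rho * min(w, 1 - w)

# Dense grid over the relaxed feasible set [0, 1].
grid = [i / 100000 for i in range(100001)]

for rho in (0.0, 0.1, 1.0):
    w_star = min(grid, key=lambda w: penalized(w, rho))
    print(f"rho = {rho}: argmin over [0, 1] = {w_star:.3f}")
```

With ρ = 0 the relaxed minimizer is the non-binary 0.7; at ρ = 0.1 it moves to 0.75; at ρ = 1 it lands exactly on the binary point w = 1, reproducing the rounding decision without an actual rounding step.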
M. De Santis
DIAG, Sapienza University of Rome, 00185 Rome, Italy
Jordan Frécon
Université Jean Monnet Saint-Etienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516, F-42023, Saint-Etienne, France
F. Rinaldi
Department of Mathematics, University of Padova, 35121 Padova, Italy
Saverio Salzo
Associate Professor, Sapienza University of Rome, Italy
Optimization, Machine Learning
Martin Schmidt
Trier University - Department of Mathematics
Optimization, Operations Research, Energy, Brewing, Baking