🤖 AI Summary
Continuous relaxation and rounding strategies in mixed-binary hyperparameter optimization (MBHO) suffer from solution inconsistency due to the inherent gap between relaxed and discrete solutions.
Method: We propose an adaptive-penalty-based continuous bilevel reformulation, a rigorous equivalence transformation of MBHO into a continuous bilevel problem with $L_0/L_1$-type penalty terms. Under suitable assumptions, the reformulation is theoretically guaranteed to yield exact mixed-binary solutions. The method integrates implicit-function-gradient estimation, group-sparse modeling, and data distillation within a unified framework, ensuring compatibility with mainstream continuous bilevel solvers.
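The paper's exact formulation is not reproduced here, but a generic sketch of a penalized continuous bilevel reformulation of this kind (with illustrative symbols $\lambda$, $w$, $F$, $L$, $\rho$ of our own choosing, not the paper's notation) looks like:

$$
\min_{\lambda \in [0,1]^p} \; F\big(\lambda, w^*(\lambda)\big) + \rho \sum_{i=1}^{p} \min(\lambda_i,\, 1 - \lambda_i)
\quad \text{s.t.} \quad w^*(\lambda) \in \arg\min_{w} L(w, \lambda),
$$

where the binary hyperparameter $\lambda \in \{0,1\}^p$ is relaxed to the box $[0,1]^p$ and the concave penalty term vanishes exactly at binary points, so that for a sufficiently large penalty weight $\rho$ the continuous minimizers are themselves binary.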
Results: On group-sparse regression structure discovery and data distillation tasks, our approach is competitive with state-of-the-art relaxation-and-rounding methods while guaranteeing mixed-binary solutions. Empirical results validate both the theoretical soundness of our formulation and its strong generalization capability in practical settings.
📝 Abstract
In recent years, bilevel approaches have become a popular means of efficiently estimating high-dimensional hyperparameters of machine learning models. However, to date, binary parameters are handled by continuous relaxation and rounding strategies, which can lead to inconsistent solutions. In this context, we tackle the challenging optimization of mixed-binary hyperparameters by resorting to an equivalent continuous bilevel reformulation based on an appropriate penalty term. We propose an algorithmic framework that, under suitable assumptions, is guaranteed to provide mixed-binary solutions. Moreover, the generality of the method allows existing continuous bilevel solvers to be safely used within the proposed framework. We evaluate the performance of our approach on two specific machine learning problems, namely the estimation of the group-sparsity structure in regression problems and the data distillation problem. The reported results show that our method is competitive with state-of-the-art approaches based on relaxation and rounding.
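To make the relaxation-versus-penalty idea concrete, here is a minimal toy sketch (not the paper's algorithm: the objective, penalty weight, and grid search are all illustrative assumptions). A binary "hyperparameter" $b \in \{0,1\}^2$ is relaxed to $[0,1]^2$; the plain relaxed problem attains its minimum at a fractional point, while adding a concave penalty that vanishes only at binary points drives the minimizer to a binary solution.

```python
import numpy as np

def upper_objective(x1, x2):
    """Illustrative stand-in for the upper-level (validation) loss."""
    return (2.0 * x1 + x2 - 1.2) ** 2

def penalty(x1, x2):
    """Concave penalty that is zero iff (x1, x2) is binary."""
    return min(x1, 1.0 - x1) + min(x2, 1.0 - x2)

def grid_argmin(obj, grid):
    """Exhaustive grid search over [0,1]^2 (enough for this toy)."""
    best, best_val = None, float("inf")
    for x1 in grid:
        for x2 in grid:
            v = obj(x1, x2)
            if v < best_val:
                best, best_val = (x1, x2), v
    return best

grid = np.linspace(0.0, 1.0, 21)   # step 0.05
rho = 1.0                           # penalty weight (chosen large enough here)

# Plain relaxation: the minimum lies on the line 2*x1 + x2 = 1.2,
# which contains no binary point, so the minimizer is fractional.
best_rel = grid_argmin(upper_objective, grid)

# Penalized relaxation: the minimizer snaps to the best binary point (0, 1).
best_pen = grid_argmin(
    lambda a, b: upper_objective(a, b) + rho * penalty(a, b), grid
)
```

Rounding `best_rel` coordinate-wise may or may not recover the best binary point in general; the penalized formulation sidesteps rounding entirely, which is the property the paper establishes rigorously for the bilevel setting.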