🤖 AI Summary
This work addresses three key limitations of grid search for regularization hyperparameter tuning in transfer learning: high computational cost, reduced training data due to validation set allocation, and reliance on manually specified candidate values. The authors propose a validation-free Bayesian hyperparameter learning method. Its core innovation is a data-weighted evidence lower bound (ELBo) as the model selection objective, which upweights the data likelihood term during optimization while remaining a valid lower bound on the evidence. Integrated with variational inference, the approach jointly optimizes model parameters and regularization strength over the entire training dataset. Evaluated on multiple image classification benchmarks, the method achieves test accuracy comparable to exhaustive grid search, significantly reduces training time, and eliminates the need for a held-out validation set.
📝 Abstract
A number of popular transfer learning methods rely on grid search to select regularization hyperparameters that control over-fitting. This grid search requirement has several key disadvantages: the search is computationally expensive, requires carving out a validation set that reduces the size of available data for model training, and requires practitioners to specify candidate values. In this paper, we propose an alternative to grid search: directly learning regularization hyperparameters on the full training set via model selection techniques based on the evidence lower bound ("ELBo") objective from variational methods. For deep neural networks with millions of parameters, we specifically recommend a modified ELBo that upweights the influence of the data likelihood relative to the prior while remaining a valid bound on the evidence for Bayesian model selection. Our proposed technique overcomes all three disadvantages of grid search. We demonstrate effectiveness on image classification tasks on several datasets, yielding held-out accuracy comparable to existing approaches with far less compute time.
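To make the core idea concrete, here is a minimal sketch of selecting a regularization strength by maximizing an ELBo-style objective rather than by grid search. It assumes a mean-field Gaussian posterior and an isotropic Gaussian prior whose precision `alpha` plays the role of the regularization hyperparameter; the `kappa` likelihood weight stands in for the paper's data-weighting idea. These modeling choices and names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def kl_gaussian(m, s2, alpha):
    """KL(q || p) for q = N(m, diag(s2)) and prior p = N(0, (1/alpha) I)."""
    return 0.5 * np.sum(alpha * (m**2 + s2) - 1.0 - np.log(alpha * s2))

def weighted_elbo(log_lik, m, s2, alpha, kappa=1.0):
    """Data-weighted ELBo sketch: kappa upweights the likelihood term."""
    return kappa * log_lik - kl_gaussian(m, s2, alpha)

rng = np.random.default_rng(0)
m = rng.normal(size=10)      # posterior means (toy values)
s2 = np.full(10, 0.1)        # posterior variances (toy values)

# The expected log-likelihood does not depend on alpha, so maximizing
# the ELBo over alpha has a closed form: alpha* = d / sum(m^2 + s^2).
alpha_star = m.size / np.sum(m**2 + s2)

# Compare against a coarse grid: the learned alpha* is at least as
# good (under this objective) as every grid candidate, with no
# validation set and no candidate list required.
grid = [0.01, 0.1, 1.0, 10.0]
best_grid = max(grid, key=lambda a: weighted_elbo(0.0, m, s2, a))
assert weighted_elbo(0.0, m, s2, alpha_star) >= weighted_elbo(0.0, m, s2, best_grid)
```

In a deep network the means and variances would come from the variational posterior over millions of weights, and `alpha` would be updated jointly with them during training; the closed-form step above is just the simplest instance of that joint optimization.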