🤖 AI Summary
Problem: Learning-based optimizers suffer from prohibitive computational demands (e.g., 4000 TPU-months for the state-of-the-art VeLO), rendering them impractical for real-world development and deployment.
Method: This paper proposes Celo, a lightweight and efficient meta-learned optimizer framework. Celo decouples architectural design from meta-training strategy, incorporating task embeddings, LSTM- or Transformer-parameterized update rules, diversity-aware meta-training, and a standardized evaluation protocol to enhance cross-task generalization.
Contribution/Results: Meta-trained in just 24 GPU-hours, Celo achieves stronger out-of-distribution generalization than tuned AdamW, Lion, and VeLO. This amounts to a >10⁴× reduction in meta-training cost alongside a significant advance in meta-generalization, demonstrating for the first time that strong generalization and computational efficiency can be achieved simultaneously in learned optimization.
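The learned update rule described in the Method paragraph can be sketched as follows. This is an illustrative toy only, a tiny MLP over per-parameter gradient features standing in for the LSTM/Transformer parameterization; the class, feature set, and scales are hypothetical and are not the actual Celo architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(g, m):
    # Per-parameter input features: raw gradient, momentum, and
    # log-magnitude versions (a common normalization in learned optimizers).
    return np.stack([g, m, np.log1p(np.abs(g)), np.log1p(np.abs(m))], axis=-1)

class ToyLearnedOptimizer:
    """Hypothetical sketch: a small MLP maps gradient features to updates.

    The weights W1, W2 are the meta-parameters; in a real learned optimizer
    they would be meta-trained across a distribution of tasks.
    """

    def __init__(self, n_features=4, hidden=8):
        self.W1 = rng.normal(scale=0.1, size=(n_features, hidden))
        self.W2 = rng.normal(scale=0.1, size=(hidden, 1))
        self.beta = 0.9  # decay for the momentum accumulator

    def init_state(self, params):
        return np.zeros_like(params)  # momentum state, one per parameter

    def step(self, params, grads, m):
        m = self.beta * m + (1 - self.beta) * grads
        f = features(grads, m)            # (..., n_features)
        h = np.tanh(f @ self.W1)          # (..., hidden)
        update = (h @ self.W2).squeeze(-1)
        # Small output scale keeps the (untrained) rule's steps bounded.
        return params - 0.01 * update, m
```

Meta-training would then differentiate (or evolve) the final training loss of inner tasks with respect to `W1` and `W2`; the diversity-aware meta-training in Celo concerns which inner tasks are sampled, not this inner-loop structure.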
📝 Abstract
Learned optimization has emerged as a promising alternative to hand-crafted optimizers, with the potential to discover stronger update rules that enable faster, hyperparameter-free training of neural networks. A critical element for practically useful learned optimizers that can be used off-the-shelf after meta-training is strong meta-generalization: the ability to apply the optimizers to new tasks. Recent state-of-the-art work in learned optimizers, VeLO (Metz et al., 2022), requires a large number of highly diverse meta-training tasks along with massive computational resources (4000 TPU months) to achieve meta-generalization. This makes further improvements to such learned optimizers impractical. In this work, we identify several key elements in learned optimizer architectures and meta-training procedures that lead to strong meta-generalization. We also propose evaluation metrics to reliably assess the quantitative performance of an optimizer at scale on a set of evaluation tasks. Our proposed approach, Celo, makes a significant leap in improving the meta-generalization performance of learned optimizers and also outperforms tuned state-of-the-art optimizers on a diverse set of out-of-distribution tasks, despite being meta-trained for just 24 GPU hours.