Online Loss Function Learning

📅 2023-01-30
🏛️ arXiv.org
📈 Citations: 5
✨ Influential: 0
🤖 AI Summary
Existing offline meta-learning approaches to loss function optimization evaluate the meta-objective over only the first few inner-loop iterations, which biases the learned loss toward early training and leads to suboptimal final model performance. To address this limitation, we propose an online loss function learning framework that adapts the loss function after every model parameter update, enabling, for the first time, fully online, end-to-end adaptive loss optimization. Methodologically, the approach combines gradient-coupled bilevel optimization, an online meta-update mechanism, and a differentiable parametrization of the loss function, overcoming the short-horizon constraint inherent in offline meta-learning. Extensive experiments across diverse neural architectures and benchmark datasets demonstrate consistent gains over the cross-entropy loss and state-of-the-art offline loss learning methods, with average final test accuracy improvements of 1.2–2.8%.
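The online bilevel loop described above can be sketched on a toy problem. This is an illustrative reconstruction, not the paper's implementation: the two-parameter loss L_phi(e) = phi[0]·e² + phi[1]·|e|, the finite-difference meta-gradient (standing in for the paper's gradient-coupled, differentiable meta-update), and all hyperparameters below are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D linear regression, true slope 3.0.
X_tr, X_val = rng.normal(size=50), rng.normal(size=50)
y_tr = 3.0 * X_tr + rng.normal(scale=0.1, size=50)
y_val = 3.0 * X_val

def inner_step(w, phi, lr=0.05):
    """One SGD step on the base parameter w under the learned loss
    L_phi(e) = phi[0]*e**2 + phi[1]*|e|, where e is the residual."""
    e = w * X_tr - y_tr
    grad_w = np.mean(2.0 * phi[0] * e * X_tr + phi[1] * np.sign(e) * X_tr)
    return w - lr * grad_w

def val_loss(w):
    """Fixed meta-objective: plain MSE on held-out data."""
    return np.mean((w * X_val - y_val) ** 2)

w = 0.0
phi = np.array([0.5, 0.5])   # loss parameters, adapted online
meta_lr, eps = 0.01, 1e-4

for _ in range(200):
    w = inner_step(w, phi)   # inner loop: base model update under L_phi
    # Outer loop, run after *every* base update: estimate the gradient of
    # the post-update validation loss w.r.t. phi by finite differences.
    base = val_loss(inner_step(w, phi))
    g = np.zeros_like(phi)
    for i in range(phi.size):
        phi_p = phi.copy()
        phi_p[i] += eps
        g[i] = (val_loss(inner_step(w, phi_p)) - base) / eps
    phi = np.maximum(phi - meta_lr * g, 0.0)  # keep loss weights non-negative

print(w, phi)
```

Each base SGD step is immediately followed by a meta-step on phi, so the loss keeps adapting over the full training horizon rather than being fixed by the first few iterations, which is the bias the online formulation is designed to remove.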
๐Ÿ“ Abstract
Loss function learning is a new meta-learning paradigm that aims to automate the essential task of designing a loss function for a machine learning model. Existing techniques for loss function learning have shown promising results, often improving a model's training dynamics and final inference performance. However, a significant limitation of these techniques is that the loss functions are meta-learned in an offline fashion, where the meta-objective only considers the very first few steps of training, which is a significantly shorter time horizon than the one typically used for training deep neural networks. This causes significant bias towards loss functions that perform well at the very start of training but perform poorly at the end of training. To address this issue, we propose a new loss function learning technique for adaptively updating the loss function online after each update to the base model parameters. The experimental results show that our proposed method consistently outperforms the cross-entropy loss and offline loss function learning techniques on a diverse range of neural network architectures and datasets.
Problem

Research questions and friction points this paper is trying to address.

Meta-learning adaptive loss functions for machine learning models
Addressing bias in offline loss functions with online adaptation
Improving training dynamics and final inference performance across architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online adaptive loss function updates during training
Meta-learning technique overcoming offline learning limitations
Consistent performance improvement across architectures and datasets
Christian Raymond
Victoria University of Wellington, Wellington, New Zealand
Qi Chen
Victoria University of Wellington, Wellington, New Zealand
Bing Xue
Meta Superintelligence Labs
LLM, machine learning for healthcare, representation learning, generative models
Mengjie Zhang
Victoria University of Wellington, Wellington, New Zealand