You Only Train Once

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the reliance on manual tuning or exhaustive grid search for loss weights in multi-task learning, this paper proposes the “You Only Train Once” (YOTO) framework, enabling end-to-end joint optimization of model parameters and task-specific loss weights. Methodologically, YOTO treats loss weights as learnable parameters, enforces their non-negativity and unit summation via a softmax transformation, and introduces a uniform-prior-based regularization term to mitigate gradient degradation and constrain the solution space. The formulation is fully differentiable, permitting unified gradient-based optimization. Evaluated on multi-task benchmarks spanning 3D estimation and semantic segmentation, YOTO achieves superior generalization performance and training efficiency—outperforming even the best grid-searched configurations with a single training run. This eliminates the need for costly hyperparameter sweeps while improving robustness and scalability.

📝 Abstract
The title of this paper is perhaps an overclaim. Of course, the process of creating and optimizing a learned model inevitably involves multiple training runs which potentially feature different architectural designs, input and output encodings, and losses. However, our method, You Only Train Once (YOTO), indeed contributes to limiting training to one shot for the latter aspect of loss selection and weighting. We achieve this by automatically optimizing loss weight hyperparameters of learned models in one shot via standard gradient-based optimization, treating these hyperparameters as regular parameters of the networks and learning them. To this end, we leverage the differentiability of the composite loss formulation which is widely used for optimizing multiple empirical losses simultaneously and model it as a novel layer which is parameterized with a softmax operation that satisfies the inherent positivity constraints on loss hyperparameters while avoiding degenerate empirical gradients. We complete our joint end-to-end optimization scheme by defining a novel regularization loss on the learned hyperparameters, which models a uniformity prior among the employed losses while ensuring boundedness of the identified optima. We evidence the efficacy of YOTO in jointly optimizing loss hyperparameters and regular model parameters in one shot by comparing it to the commonly used brute-force grid search across state-of-the-art networks solving two key problems in computer vision, i.e. 3D estimation and semantic segmentation, and showing that it consistently outperforms the best grid-search model on unseen test data. Code will be made publicly available.
Problem

Research questions and friction points this paper is trying to address.

Loss weights in multi-task learning are typically set by manual tuning or exhaustive grid search
Grid search requires many full training runs, making it costly and hard to scale with the number of losses
Loss weighting strongly affects performance on tasks such as 3D estimation and semantic segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Treats loss weight hyperparameters as regular network parameters, learned end-to-end with gradient-based optimization
Parameterizes the composite loss as a differentiable softmax layer, enforcing positivity and unit-sum constraints while avoiding degenerate empirical gradients
Adds a novel regularization loss encoding a uniformity prior over the losses, keeping the learned weights bounded
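The weighting mechanism described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact form of the uniformity-prior regularizer is not given in the summary, so the KL-divergence-to-uniform term, the `reg_strength` parameter, and all function names here are assumptions for exposition.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: outputs are positive and sum to one."""
    e = np.exp(z - z.max())
    return e / e.sum()

def composite_loss(task_losses, logits, reg_strength=0.1):
    """Combine per-task losses with softmax-parameterized weights.

    `logits` plays the role of the learnable hyperparameters that YOTO
    trains jointly with the model; the softmax enforces the positivity
    and unit-sum constraints on the resulting weights. The KL divergence
    to the uniform distribution stands in (as an illustrative assumption)
    for the paper's uniformity-prior regularizer, pulling the weights
    toward equal treatment of the losses and keeping them bounded.
    """
    w = softmax(logits)
    k = len(task_losses)
    uniform = np.full(k, 1.0 / k)
    kl_to_uniform = np.sum(w * np.log(w / uniform))
    total = float(np.dot(w, task_losses) + reg_strength * kl_to_uniform)
    return total, w
```

Because every step is differentiable, `logits` can simply be appended to the model's parameter list and updated by the same optimizer as the network weights, which is what makes the single-run joint optimization possible.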