Thumb on the Scale: Optimal Loss Weighting in Last Layer Retraining

📅 2025-06-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address class performance imbalance under limited and non-i.i.d. data in last-layer retraining (LLR), this work focuses on the realistic moderate-parameterization regime—where models are neither under- nor over-parameterized relative to the task. We propose a parameterization-aware loss weighting strategy, theoretically establishing that optimal class weights must explicitly depend on the model’s relative over-parameterization level with respect to the task. Empirical evaluation across multiple benchmarks demonstrates substantial improvements in class balance, including up to a 12.7% gain in minority-class accuracy. This is the first systematic study to uncover the mechanistic basis for loss weighting efficacy in the moderate-parameterization regime, bridging a critical theoretical gap between classical under- and over-parameterized assumptions. Our approach yields an interpretable, resource-efficient, and deployable optimization paradigm for fair fine-tuning under practical data and compute constraints.

Technology Category

Application Category

📝 Abstract
While machine learning models become more capable in discriminative tasks at scale, their ability to overcome biases introduced by training data has come under increasing scrutiny. Previous results suggest that there are two extremes of parameterization with very different behaviors: the population (underparameterized) setting where loss weighting is optimal and the separable overparameterized setting where loss weighting is ineffective at ensuring equal performance across classes. This work explores the regime of last layer retraining (LLR) in which the unseen limited (retraining) data is frequently inseparable and the model proportionately sized, falling between the two aforementioned extremes. We show, in theory and practice, that loss weighting is still effective in this regime, but that these weights emph{must} take into account the relative overparameterization of the model.
Problem

Research questions and friction points this paper is trying to address.

Explores loss weighting effectiveness in last layer retraining
Addresses biases in training data for machine learning models
Balances model performance across classes in intermediate regimes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal loss weighting in last layer retraining
Balances underparameterized and overparameterized extremes
Accounts for model overparameterization in weights
🔎 Similar Papers
No similar papers found.