Robustness Beyond Known Groups with Low-rank Adaptation

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor generalization of deep learning models on unknown sensitive subpopulations, a challenge exacerbated by the common reliance of existing methods on subpopulation labels. To overcome this limitation, the authors propose Low-rank Error Informed Adaptation (LEIA), a label-free approach that analyzes the low-dimensional structure of model errors in the representation space and applies low-rank fine-tuning at the logits layer to directly correct latent failure modes. LEIA employs a two-stage optimization strategy that integrates low-rank adaptation with unsupervised subpopulation robustness enhancement, without modifying the backbone network. Evaluated across five real-world datasets, LEIA consistently improves worst-group performance under fully unknown, partially known, and fully known subpopulation settings, while remaining parameter-efficient, computationally lightweight, and insensitive to hyperparameter choices.

📝 Abstract
Deep learning models trained to optimize average accuracy often exhibit systematic failures on particular subpopulations. In real-world settings, the subpopulations most affected by such disparities are frequently unlabeled or unknown, motivating methods that perform well on sensitive subgroups without those subgroups being pre-specified. However, existing group-robust methods typically assume prior knowledge of relevant subgroups, using group annotations for training or model selection. We propose Low-rank Error Informed Adaptation (LEIA), a simple two-stage method that improves group robustness by identifying a low-dimensional subspace in the representation space where model errors concentrate. LEIA restricts adaptation to this error-informed subspace via a low-rank adjustment to the classifier logits, directly targeting latent failure modes without modifying the backbone or requiring group labels. Using five real-world datasets, we analyze group robustness under three settings: (1) no knowledge of subgroup relevance, (2) partial knowledge of subgroup relevance, and (3) full knowledge of subgroup relevance. Across all settings, LEIA consistently improves worst-group performance while remaining fast, parameter-efficient, and robust to hyperparameter choice.
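The abstract only sketches the method at a high level: find a low-dimensional subspace of the representation space where errors concentrate, then apply a low-rank adjustment to the classifier logits within that subspace, leaving the backbone frozen. The paper's actual algorithm is not reproduced here; the following is a minimal NumPy toy sketch of that general two-stage idea on synthetic data, with the subspace taken from an SVD of misclassified-point representations (an assumption, not the authors' exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: frozen features Z (n x d), frozen linear head W (d x c), labels y.
n, d, c, r = 200, 16, 3, 2              # r: rank of the low-rank adjustment
Z = rng.normal(size=(n, d))
W = rng.normal(size=(d, c)) * 0.1
y = rng.integers(0, c, size=n)

def xent(logits):
    """Mean cross-entropy of logits against labels y."""
    s = logits - logits.max(axis=1, keepdims=True)
    logp = s - np.log(np.exp(s).sum(axis=1, keepdims=True))
    return -logp[np.arange(n), y].mean()

# Stage 1 (error-informed subspace): take the top-r principal directions of
# the representations of misclassified points, where errors concentrate.
base_logits = Z @ W
wrong = base_logits.argmax(axis=1) != y
Z_err = Z[wrong] - Z[wrong].mean(axis=0)
_, _, Vt = np.linalg.svd(Z_err, full_matrices=False)
U = Vt[:r].T                            # d x r basis of the error subspace

# Stage 2: rank-r adjustment U @ B to the classifier weights; only the small
# r x c matrix B is trained, while the backbone and W stay frozen.
B = np.zeros((r, c))
lr, loss_before = 0.1, xent(base_logits)
for _ in range(200):
    logits = Z @ (W + U @ B)
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    g = p.copy()
    g[np.arange(n), y] -= 1.0           # softmax cross-entropy gradient
    B -= lr * (U.T @ (Z.T @ g)) / n     # gradient step on B only
loss_after = xent(Z @ (W + U @ B))
```

Note that only `r * c` parameters are updated, which is what makes an adjustment of this shape parameter-efficient; how the subspace is estimated without group labels, and the paper's unsupervised robustness objective, are where LEIA's actual contribution lies.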
Problem

Research questions and friction points this paper is trying to address.

group robustness
unknown subgroups
worst-group performance
distributional robustness
subpopulation shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

low-rank adaptation
group robustness
error-informed subspace
worst-group performance
subpopulation shift
Abinitha Gourabathina
Department of Electrical Engineering & Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA

Hyewon Jeong
M.D., Ph.D. Candidate @ EECS, MIT
Machine Learning · Healthcare · Systems Neuroscience

Teya Bergamaschi
PhD Student, Massachusetts Institute of Technology
Machine Learning · Computational Cardiology · Medical Decision Making

Marzyeh Ghassemi
Department of Electrical Engineering & Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA

Collin Stultz
MIT and MGH
Biophysics · Medicine · Cardiology · Machine Learning