Class Is Invariant to Context and Vice Versa: On Learning Invariance for Out-Of-Distribution Generalization

📅 2022-08-06
🏛️ European Conference on Computer Vision
📈 Citations: 12
Influential: 2
📄 PDF
🤖 AI Summary
In out-of-distribution (OOD) generalization, models suffer from context bias due to imbalanced context distributions across classes. Method: This paper proposes an unsupervised bias estimation and disentanglement framework that requires no context annotations. Its core innovation is the first identification and exploitation of the “context-to-class invariance” principle: leveraging class labels as natural environmental shifts to enforce bidirectional class–context invariance via cross-class invariant contrastive learning. The method integrates environment-disentangled representation learning, contrastive similarity constraints, and a reweighted classifier. Contribution/Results: It achieves state-of-the-art performance on multiple benchmarks involving context bias and domain gaps. A theoretical analysis proves that the approach uniquely identifies the true invariant features. Code and comprehensive analysis are publicly available.
📝 Abstract
Out-Of-Distribution generalization (OOD) is all about learning invariance against environmental changes. If the context in every class is evenly distributed, OOD would be trivial because the context can be easily removed due to an underlying principle: class is invariant to context. However, collecting such a balanced dataset is impractical. Learning on imbalanced data makes the model bias to context and thus hurts OOD. Therefore, the key to OOD is context balance. We argue that the widely adopted assumption in prior work, the context bias can be directly annotated or estimated from biased class prediction, renders the context incomplete or even incorrect. In contrast, we point out the everoverlooked other side of the above principle: context is also invariant to class, which motivates us to consider the classes (which are already labeled) as the varying environments to resolve context bias (without context labels). We implement this idea by minimizing the contrastive loss of intra-class sample similarity while assuring this similarity to be invariant across all classes. On benchmarks with various context biases and domain gaps, we show that a simple re-weighting based classifier equipped with our context estimation achieves state-of-the-art performance. We provide the theoretical justifications in Appendix and codes on https://github.com/simpleshinobu/IRMCon.
Problem

Research questions and friction points this paper is trying to address.

Learning invariance for out-of-distribution generalization without context labels
Addressing context bias in imbalanced datasets through class-based environments
Achieving context balance by leveraging class-context invariance relationships
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses class labels as environments to estimate context bias
Minimizes contrastive loss for invariant intra-class similarity
Achieves state-of-the-art with simple re-weighting classifier
🔎 Similar Papers
No similar papers found.
J
Jiaxin Qi
Nanyang Technological University
Kaihua Tang
Kaihua Tang
Nanyang Technological University
Computer VisionMachine LearningArtificial Intelligence
Q
Qianru Sun
Singapore Management University
X
Xiansheng Hua
Damo Academy, Alibaba Group
H
Hanwang Zhang
Nanyang Technological University