🤖 AI Summary
Long-tailed classification suffers severe performance degradation on tail classes due to extreme label-distribution imbalance, and existing approaches rely predominantly on heuristic reweighting or resampling strategies that lack theoretical grounding. This paper integrates causal inference into long-tailed learning, revealing SGD momentum as a confounder with opposing causal effects: it biases tail-class prediction towards the head, yet benefits representation learning and head-class prediction. The paper proposes a momentum-effect decoupling paradigm that enables theoretically grounded optimization via causal intervention during training and counterfactual reasoning at inference. The method works with standard CNN backbones by decomposing the momentum effect, requiring no architectural modifications or auxiliary modules. Extensive experiments demonstrate state-of-the-art performance on Long-tailed CIFAR-10/100, ImageNet-LT, and LVIS, with particularly pronounced gains in tail-class accuracy, supporting both the method's efficacy and its interpretability.
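To make the inference-time counterfactual step concrete, here is a minimal PyTorch sketch of the core idea: subtract from each logit the contribution that flows through a moving-average feature direction, a proxy for the momentum-induced bias. This is an illustration under stated assumptions, not the authors' released implementation; the normalized-classifier form and the names `tau`, `alpha`, `gamma`, and `d_hat` are assumptions made for this sketch.

```python
import torch

def tde_logits(x, w, d_hat, tau=16.0, alpha=1.5, gamma=1e-9):
    """Counterfactual (direct-effect style) logits, hedged sketch.

    x:     (B, D) feature vectors from the backbone
    w:     (C, D) classifier weights
    d_hat: (D,)   unit direction of a moving average of features
                  (assumed to be maintained during training)
    """
    w_norm = w.norm(dim=1) + gamma                # (C,) classifier norms
    x_norm = x.norm(dim=1, keepdim=True) + gamma  # (B, 1) feature norms
    # Factual logits from a cosine-style normalized classifier.
    factual = tau * (x @ w.t()) / (x_norm * w_norm)
    # Counterfactual term: the logit contribution carried by the biased
    # direction d_hat, scaled by each sample's alignment with it.
    cos_xd = (x @ d_hat) / x_norm.squeeze(1)      # (B,)
    bias = tau * (d_hat @ w.t()) / w_norm         # (C,)
    # Keep the direct effect of the input; remove the momentum-induced part.
    return factual - alpha * cos_xd.unsqueeze(1) * bias.unsqueeze(0)

# Example shapes (hypothetical):
# x = torch.randn(8, 512); w = torch.randn(100, 512)
# d_hat = torch.nn.functional.normalize(torch.randn(512), dim=0)
# tde_logits(x, w, d_hat).shape  # torch.Size([8, 100])
```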
📝 Abstract
As the number of classes grows, maintaining a balanced dataset across many classes is challenging because data are long-tailed in nature; it is even impossible when the samples of interest co-exist in one collectable unit, e.g., multiple visual instances in one image. Therefore, long-tailed classification is key to deep learning at scale. However, existing methods are mainly based on re-weighting/re-sampling heuristics that lack a fundamental theory. In this paper, we establish a causal inference framework, which not only unravels the whys of previous methods, but also derives a new principled solution. Specifically, our theory shows that the SGD momentum is essentially a confounder in long-tailed classification. On one hand, it has a harmful causal effect that misleads tail prediction, biasing it towards the head. On the other hand, its induced mediation also benefits representation learning and head prediction. Our framework elegantly disentangles the paradoxical effects of the momentum by pursuing the direct causal effect caused by an input sample. In particular, we use causal intervention in training, and counterfactual reasoning in inference, to remove the "bad" while keeping the "good". We achieve new state-of-the-art results on three long-tailed visual recognition benchmarks: Long-tailed CIFAR-10/-100 and ImageNet-LT for image classification, and LVIS for instance segmentation.
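To ground the "momentum is a confounder" statement, below is a hedged sketch of the training-side bookkeeping such a method might use: an exponential moving average of backbone features whose unit direction serves as the confounder proxy `d_hat` consumed by the inference sketch above. The class `FeatureEMA` and the decay rate `mu` are illustrative assumptions, not the paper's API.

```python
import torch

class FeatureEMA:
    """Tracks a momentum-like moving average of features during training."""

    def __init__(self, dim, mu=0.9):
        self.mu = mu                    # assumed EMA decay rate
        self.x_bar = torch.zeros(dim)   # running mean feature

    @torch.no_grad()
    def update(self, x):
        # x: (B, D) batch of backbone features; update the running average.
        self.x_bar = self.mu * self.x_bar + (1 - self.mu) * x.mean(dim=0)

    @property
    def d_hat(self):
        # Unit direction of the accumulated average, used as the
        # confounder proxy at inference time.
        return self.x_bar / (self.x_bar.norm() + 1e-9)

# Hypothetical usage during training:
# ema = FeatureEMA(dim=512)
# ema.update(features)        # once per batch
# logits = tde_logits(features, w, ema.d_hat)  # at inference
```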