Mixture of Balanced Information Bottlenecks for Long-Tailed Visual Recognition

📅 2025-09-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the co-occurring representation degradation and classification bias in visual recognition under long-tailed distributions, this paper proposes an information bottleneck–based balanced learning framework. Methodologically, it introduces (1) a multi-branch information bottleneck (MBIB) network that jointly leverages shallow fine-grained details and deep semantic features to enhance discriminative representation for tail classes; and (2) a synergistic mechanism combining loss reweighting and self-distillation to dynamically balance information compression and preservation, thereby jointly optimizing representation learning and classifier calibration. The framework is end-to-end trainable without requiring auxiliary annotations or preprocessing. Evaluated on three standard long-tailed benchmarks—CIFAR100-LT, ImageNet-LT, and iNaturalist 2018—it achieves state-of-the-art performance, significantly improving tail-class accuracy while maintaining head-class stability.

📝 Abstract
Deep neural networks (DNNs) have achieved significant success in various applications with large-scale and balanced data. However, data in real-world visual recognition are usually long-tailed, bringing challenges to efficient training and deployment of DNNs. Information bottleneck (IB) is an elegant approach for representation learning. In this paper, we propose a balanced information bottleneck (BIB) approach, in which loss function re-balancing and self-distillation techniques are integrated into the original IB network. BIB is thus capable of learning a sufficient representation with essential label-related information fully preserved for long-tailed visual recognition. To further enhance the representation learning capability, we also propose a novel structure of mixture of multiple balanced information bottlenecks (MBIB), where different BIBs are responsible for combining knowledge from different network layers. MBIB facilitates an end-to-end learning strategy that trains representation and classification simultaneously from an information theory perspective. We conduct experiments on commonly used long-tailed datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018. Both BIB and MBIB reach state-of-the-art performance for long-tailed visual recognition.
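The abstract's core idea, combining a variational-IB compression term with a re-balanced classification loss, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the effective-number class reweighting, the Gaussian-prior KL term, and the `beta_kl` trade-off weight are illustrative assumptions standing in for the paper's specific BIB formulation.

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    # Effective-number reweighting (illustrative choice): rare classes
    # receive larger weights; normalized so weights sum to the class count.
    eff = 1.0 - np.power(beta, counts)
    w = (1.0 - beta) / eff
    return w * len(counts) / w.sum()

def bib_loss(logits, labels, mu, log_var, counts, beta_kl=1e-3):
    """Re-balanced cross-entropy + IB compression term (sketch)."""
    w = class_balanced_weights(counts)
    # Numerically stable log-softmax for the classification term.
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -(w[labels] * log_p[np.arange(len(labels)), labels]).mean()
    # Compression: KL(q(z|x) = N(mu, sigma^2) || N(0, I)),
    # penalizing label-irrelevant information in the representation.
    kl = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var).sum(axis=1).mean()
    return ce + beta_kl * kl
```

The `beta_kl` factor plays the IB trade-off role: larger values compress the representation harder, smaller values preserve more label-related information for tail classes.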
Problem

Research questions and friction points this paper is trying to address.

Addressing long-tailed data challenges in visual recognition
Integrating loss re-balancing and self-distillation into information bottleneck
Simultaneously training representation and classification via information theory
Innovation

Methods, ideas, or system contributions that make the work stand out.

Balanced Information Bottleneck with re-balancing
Mixture of multiple BIBs combining layer knowledge
End-to-end information theory training strategy
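The mixture idea, where several BIB branches attached to different layers are kept consistent via self-distillation, can be sketched as each branch matching a softened ensemble prediction. This is a generic multi-branch distillation sketch under assumed details (mean-logit ensemble, temperature `t`, symmetric branch weighting), not the paper's exact MBIB objective.

```python
import numpy as np

def softmax(x, t=1.0):
    # Temperature-scaled, numerically stable softmax.
    z = x / t - (x / t).max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mbib_distill_loss(branch_logits, t=2.0):
    """KL from the ensemble to each branch; branch_logits: (B, N, C)."""
    ensemble = softmax(np.mean(branch_logits, axis=0), t)
    loss = 0.0
    for logits in branch_logits:
        p = softmax(logits, t)
        # KL(ensemble || branch), averaged over samples.
        loss += (ensemble * (np.log(ensemble + 1e-12)
                             - np.log(p + 1e-12))).sum(axis=1).mean()
    return loss / len(branch_logits)
```

When all branches agree the distillation term vanishes, so the gradient pressure falls on branches whose layer-specific features disagree with the ensemble.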