🤖 AI Summary
Conventional class imbalance metrics rely solely on class cardinality, ignoring data redundancy and intrinsic differences in class learnability. Method: This paper introduces intrinsic dimension (ID)—a model-agnostic, unsupervised measure grounded in manifold geometry—to quantify class imbalance by characterizing information density and complexity per class. ID can be used standalone or jointly with cardinality to design more robust reweighting or resampling strategies, all without requiring model training. Contribution/Results: Experiments across five datasets with varying imbalance ratios demonstrate that ID alone significantly outperforms cardinality-based baselines; combining ID with cardinality further enhances classification performance of mainstream imbalance-learning methods. This work establishes a novel theoretical foundation for imbalance quantification and provides a practical, geometry-informed tool for imbalanced learning.
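Since the summary describes per-class ID as a training-free signal for re-weighting, here is a minimal sketch of that idea. It estimates each class's intrinsic dimension with the classical Levina–Bickel nearest-neighbour MLE estimator (the paper's exact estimator may differ) and converts the estimates into class weights. The function names, the choice of estimator, and the normalisation so that weights sum to the number of classes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mle_intrinsic_dim(X, k=10):
    """Levina-Bickel MLE estimate of the intrinsic dimension of a point cloud X.

    Illustrative stand-in; the paper may use a different ID estimator.
    Assumes each class has at least a handful of examples (n > k).
    """
    n = X.shape[0]
    k = min(k, n - 1)
    # Pairwise Euclidean distances (fine for moderate n; use a KD-tree for large datasets).
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1))
    # Sort each row; column 0 is the zero distance of a point to itself, so skip it.
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    # Per-point MLE: inverse of the mean of log(T_k / T_j) over j = 1..k-1.
    logs = np.log(knn[:, -1:] / np.maximum(knn[:, :-1], 1e-12))
    per_point = (k - 1) / np.maximum(logs.sum(axis=1), 1e-12)
    return float(per_point.mean())

def id_class_weights(features, labels):
    """Per-class weights proportional to estimated ID, normalised to sum to the
    number of classes (a common re-weighting convention, assumed here)."""
    classes = np.unique(labels)
    ids = np.array([mle_intrinsic_dim(features[labels == c]) for c in classes])
    weights = ids / ids.sum() * len(classes)
    return dict(zip(classes.tolist(), weights.tolist()))
```

Such weights could then be plugged into, say, a weighted cross-entropy loss in place of the usual inverse-frequency weights; whether the weighting should grow or shrink with ID is a design choice the paper investigates, and the direction used above is only one plausible option.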
📝 Abstract
Imbalance in classification tasks is commonly quantified by the cardinalities of examples across classes. This, however, disregards the presence of redundant examples and inherent differences in the learning difficulty of classes. Alternatively, one can use more involved measures such as training loss or uncertainty, but these depend on training a machine learning model. Our paper proposes using data Intrinsic Dimensionality (ID) as an easy-to-compute, model-free measure of imbalance that can be seamlessly incorporated into various imbalance mitigation methods. Our results across five different datasets with a diverse range of imbalance ratios show that ID consistently outperforms the cardinality-based re-weighting and re-sampling techniques used in the literature. Moreover, we show that combining ID with cardinality can further improve performance. Code: https://github.com/cagries/IDIM.
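As a concrete illustration of the abstract's final point, the sketch below blends ID-based and inverse-cardinality class weights with a single interpolation factor. The combination rule and the `alpha` parameter are assumptions made for illustration; the paper's actual way of combining the two signals may differ.

```python
import numpy as np

def combined_class_weights(ids, counts, alpha=0.5):
    """Blend ID-based and inverse-cardinality class weights.

    ids    : per-class intrinsic-dimension estimates
    counts : per-class example counts
    alpha  : interpolation factor (hypothetical; the paper's combination rule may differ)
    """
    ids = np.asarray(ids, dtype=float)
    counts = np.asarray(counts, dtype=float)
    w_id = ids / ids.sum()            # higher-ID (harder) classes get more weight
    inv = 1.0 / counts
    w_card = inv / inv.sum()          # rarer classes get more weight
    w = alpha * w_id + (1.0 - alpha) * w_card
    return w / w.sum() * len(w)       # normalise to sum to the number of classes

# Example: three classes with counts 1000/100/10 and rising intrinsic dimension.
print(combined_class_weights(ids=[4.0, 6.0, 9.0], counts=[1000, 100, 10]))
```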