🤖 AI Summary
This work addresses three pervasive challenges in real-world data: long-tailed class distributions, label scarcity, and domain shift. Methodologically, we propose a robust deep learning framework for imperfect data comprising: (1) a generative debiasing mechanism that mitigates class imbalance; (2) inductive regularization tailored to tail classes that improves few-shot generalization; (3) a metric-driven semi-supervised learning strategy that directly optimizes the performance metrics of interest; and (4) a low-resource domain adaptation framework enabling cross-domain few- and zero-shot transfer. Extensive experiments demonstrate substantial improvements in both accuracy and fairness under long-tailed and cross-domain settings. Our approach achieves state-of-the-art results across multiple benchmarks, offering a systematic solution for efficient modeling of limited, non-stationary real-world data.
📝 Abstract
The distribution of data in the world (e.g., on the internet) differs significantly from that of well-curated datasets and is often over-populated with samples from common categories. Algorithms designed for well-curated datasets perform suboptimally when used to learn from imperfect datasets with long-tailed imbalances and distribution shifts. To expand the use of deep models, it is essential to move beyond the labor-intensive curation process by developing robust algorithms that can learn from diverse, real-world data distributions. Toward this goal, we develop practical algorithms for Deep Neural Networks that can learn from the limited and imperfect data present in the real world. This thesis is divided into four parts, each covering a scenario of learning from limited or imperfect data. The first part focuses on Learning Generative Models from Long-Tail Data, where we mitigate mode collapse and enable diverse, aesthetic image generation for tail (minority) classes. In the second part, we enable effective generalization on tail classes through Inductive Regularization schemes, which allow tail classes to generalize as effectively as head classes without requiring explicit generation of images. The third part develops algorithms for Optimizing Relevant Metrics when learning from long-tailed data with limited annotation (semi-supervised), and the fourth part focuses on Efficient Domain Adaptation of the model to various domains with very few to zero labeled samples.