🤖 AI Summary
This study investigates the impact of data diversity, rather than dataset scale alone, on the generalization of vision models. Method: We systematically evaluate MAML, multi-step MAML variants, and supervised pretraining across twelve few-shot and fine-grained vision benchmarks (e.g., Omniglot, CIFAR-FS, Aircraft) and five model configurations, using Task2Vec to quantify task-level data diversity. Contribution/Results: We find moderate to strong positive correlations between test accuracy and diversity (R² = 0.15–0.42) and weaker but significant correlations between loss and diversity (R² ≈ 0.2), establishing diversity as a relevant latent factor in generalization. Notably, MAML appears more sensitive to diversity than supervised pretraining, suggesting a structural advantage of meta-learning in exploiting diverse data. We advocate Task2Vec-based diversity as a core metric for dataset evaluation, shifting data assessment for large models from a quantity-oriented toward a quality-oriented paradigm.
📝 Abstract
Data and model size currently dominate the narrative in the training of very large, powerful models; the effects of other attributes of the training dataset on model performance remain underexplored. We hypothesize that dataset diversity can impact the performance of vision models. Our study shows positive correlations between test-set accuracy and data diversity, providing an argument for further research into dataset attributes beyond size. We analyzed pre-training and model-agnostic meta-learning (MAML) methods on twelve popular visual datasets (e.g., Omniglot, CIFAR-FS, Aircraft) and five model configurations, including MAML variants with different numbers of inner gradient steps and supervised learning. We show moderate to strong positive correlations (R-squared: 0.15–0.42) between accuracy and data diversity and weaker but still significant correlations (R-squared: ~0.2) between loss and diversity. These findings support our hypothesis and point to a promising direction for deeper exploration of how formal measures of data diversity influence model performance. This initial study highlights the potential of (Task2Vec) data diversity as a valuable measure in the rapidly evolving field of large-scale learning and emphasizes that understanding the dataset is key to building more powerful and generalizable models.
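As a rough intuition for the kind of diversity measure discussed above: given Task2Vec-style embedding vectors for a set of tasks (the Fisher-based embedding itself is not computed here), a diversity coefficient can be sketched as the average pairwise cosine distance between task embeddings. This is a minimal illustrative sketch, not the paper's exact computation; the function name and interface are our own.

```python
import numpy as np

def diversity_coefficient(embeddings):
    """Average pairwise cosine distance between task embeddings.

    `embeddings` is an (n_tasks, d) array of task embedding vectors
    (e.g., precomputed Task2Vec embeddings); higher values indicate
    a more diverse set of tasks. Illustrative sketch only.
    """
    X = np.asarray(embeddings, dtype=float)
    # Normalize rows to unit length so dot products give cosine similarity.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = X @ X.T
    n = len(X)
    # Mean cosine distance over distinct pairs (exclude the diagonal).
    off_diag = sims[~np.eye(n, dtype=bool)]
    return float(np.mean(1.0 - off_diag))

# Identical tasks -> diversity 0; orthogonal tasks -> diversity 1.
same = diversity_coefficient([[1, 0], [1, 0]])   # 0.0
orth = diversity_coefficient([[1, 0], [0, 1]])   # 1.0
```

Correlating such a scalar per training set with test accuracy is the kind of analysis the study performs across benchmarks and model configurations.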