Self-supervised visual learning in the low-data regime: a comparative evaluation

📅 2024-04-26
🏛️ Neurocomputing
📈 Citations: 2
Influential: 0
🤖 AI Summary
This study investigates few-shot visual learning under extreme label scarcity (<1% of ImageNet labels), systematically evaluating the generalization and robustness of self-supervised learning (SSL) methods. Within a unified benchmark, it conducts the first systematic side-by-side comparison of prominent SSL frameworks (including SimCLR, BYOL, DINO, and MAE) under linear probing, fine-tuning, and semi-supervised transfer protocols. Methodologically, it rigorously controls for architecture, data augmentation, and downstream evaluation to isolate SSL-specific effects. Key findings show that contrastive approaches significantly outperform generative ones, and that teacher-student methods are the most resilient to label scarcity, yielding up to +12.3% absolute improvement in downstream classification accuracy. The work empirically delineates the performance boundaries and failure modes of SSL in ultra-low-data regimes, uncovering critical trade-offs between representation quality and label efficiency. These results provide evidence-based guidance for SSL method selection and architectural design in few-shot settings, offering novel insights into the interplay between self-supervision paradigms and data-limited generalization.
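The three evaluation protocols differ mainly in which parameters are updated: linear probing freezes the pretrained backbone and trains only a classification head on its features. A minimal sketch of that protocol in PyTorch, using a hypothetical stand-in encoder and random toy data in place of the paper's actual SSL backbones and ImageNet splits:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained SSL backbone (e.g. SimCLR/DINO);
# in practice you would load real pretrained weights here.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128))
for p in encoder.parameters():
    p.requires_grad = False  # linear probing: the backbone stays frozen

# The linear probe: a single classification head on top of frozen features.
probe = nn.Linear(128, 10)
opt = torch.optim.SGD(probe.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Toy "low-label" split: a handful of labelled examples.
x = torch.randn(64, 3, 32, 32)
y = torch.randint(0, 10, (64,))

for _ in range(20):
    with torch.no_grad():
        feats = encoder(x)  # features come from the frozen encoder
    logits = probe(feats)
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Probe accuracy on the (toy) labelled set.
acc = (probe(encoder(x)).argmax(1) == y).float().mean().item()
```

Fine-tuning would instead leave `requires_grad = True` on the encoder, and semi-supervised transfer additionally exploits unlabelled data; only the probe's handful of parameters is trained here, which is why the protocol isolates representation quality.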

Problem

Research questions and friction points this paper is trying to address.

Self-Supervised Learning
Effectiveness Evaluation
Limited Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Supervised Learning
Limited Data
Domain-specific Performance
Sotirios Konstantakos
Department of Informatics and Telematics, Harokopio University of Athens, Athens, Greece
Despina Ioanna Chalkiadaki
Department of Informatics and Telematics, Harokopio University of Athens, Athens, Greece
Ioannis Mademlis
Department of Informatics and Telematics, Harokopio University of Athens, Athens, Greece
Yuki M. Asano
Full Professor, Head of FunAI Lab, University of Technology Nuremberg
Deep Learning, Multimodal Learning, Self-supervised Learning, Large Model Adaptation, LLMs
E. Gavves
QUVA Lab, University of Amsterdam, Amsterdam, Netherlands
Georgios Papadopoulos
PhD candidate, Imperial College London
Causal Inference, Time Series, Machine Learning, Biostatistics, Gaussian Processes