🤖 AI Summary
Self-supervised vision transformer training relies heavily on large-scale real-world data and faces challenges in manually curating hard negative samples. Method: This paper proposes Syn2Co, a framework that jointly leverages generative models to synthesize image data and constructs synthetic hard negatives directly in the representation space, establishing a more challenging contrastive learning environment and enabling end-to-end self-supervised training on DeiT-S and Swin-T without real labels or explicit hard-negative mining. Contribution/Results: Syn2Co improves feature robustness and cross-task transferability, achieving performance on ImageNet linear evaluation that closely approaches fully supervised training on real data. The work delineates the effective boundary of synthetic data in representation learning and points toward reducing dependence on real-world supervision.
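To make "synthetic hard negatives in the representation space" concrete, here is a minimal sketch of one common recipe (convexly mixing the hardest existing negatives, in the style of feature-space mixing methods). The function name and all parameters are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def synthesize_hard_negatives(query, negatives, n_synth=4, rng=None):
    """Illustrative sketch (not the paper's code): create synthetic hard
    negatives by convexly mixing the hardest existing negatives in the
    L2-normalized representation space.

    query:     (d,) L2-normalized anchor embedding
    negatives: (n, d) L2-normalized negative embeddings
    Returns:   (n_synth, d) L2-normalized synthetic negatives
    """
    rng = rng or np.random.default_rng(0)
    # Hardest negatives = most similar to the query (highest cosine sim).
    sims = negatives @ query
    hard = negatives[np.argsort(-sims)[: max(2, n_synth)]]
    synth = []
    for _ in range(n_synth):
        i, j = rng.choice(len(hard), size=2, replace=False)
        alpha = rng.uniform(0.0, 1.0)
        z = alpha * hard[i] + (1.0 - alpha) * hard[j]  # convex mix of two hard negatives
        synth.append(z / np.linalg.norm(z))            # project back to the unit sphere
    return np.stack(synth)

# Usage on a random batch of embeddings.
rng = np.random.default_rng(42)
q = rng.normal(size=16)
q /= np.linalg.norm(q)
neg = rng.normal(size=(32, 16))
neg /= np.linalg.norm(neg, axis=1, keepdims=True)
out = synthesize_hard_negatives(q, neg)
print(out.shape)  # (4, 16)
```

The mixed points lie "between" the hardest real negatives on the unit sphere, so they tend to sit closer to the anchor than most batch negatives, tightening the contrastive objective without any extra data curation.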
📝 Abstract
This paper does not introduce a new method per se. Instead, we build on existing self-supervised learning approaches for vision, drawing inspiration from the adage "fake it till you make it". While contrastive self-supervised learning has achieved remarkable success, it typically relies on vast amounts of real-world data and carefully curated hard negatives. To explore alternatives to these requirements, we investigate two forms of "faking it" in vision transformers. First, we study the potential of generative models for unsupervised representation learning, leveraging synthetic data to augment sample diversity. Second, we examine the feasibility of generating synthetic hard negatives in the representation space, creating diverse and challenging contrasts. Our framework, dubbed Syn2Co, combines both approaches and evaluates whether synthetically enhanced training can lead to more robust and transferable visual representations on DeiT-S and Swin-T architectures. Our findings highlight the promise and limitations of synthetic data in self-supervised learning, offering insights for future work in this direction.