🤖 AI Summary
This paper addresses the challenge of reducing reliance on large-scale labeled data in text-image multimodal analysis through self-supervised contrastive learning. Methodologically, it systematically reviews cross-modal positive/negative sample construction, feature-space alignment mechanisms, and unsupervised representation learning paradigms. It introduces the first taxonomy of vision-language contrastive methods based on model architecture, integrating pretraining objectives, encoder designs (e.g., CLIP, ALIGN), and similarity measurement techniques into a unified analytical framework. The work clarifies the technical evolution trajectory and identifies core bottlenecks—including computational inefficiency, sensitivity to data distribution shifts, and limited interpretability. It further proposes a modeling pathway that jointly optimizes efficiency, robustness, and explainability. The contributions provide both theoretical foundations and practical guidelines for advancing self-supervised multimodal learning, enabling more scalable, generalizable, and transparent joint representation learning across modalities.
📝 Abstract
Self-supervised learning is a machine learning approach that generates implicit labels by learning underlying patterns and extracting discriminative features from unlabeled data, without manual labeling. Contrastive learning introduces the concept of "positive" and "negative" samples, where positive pairs (e.g., variations of the same image/object) are brought together in the embedding space, and negative pairs (e.g., views from different images/objects) are pushed farther apart. This methodology has shown significant improvements in image understanding and image-text analysis without heavy reliance on labeled data. In this paper, we comprehensively discuss the terminology, recent developments, and applications of contrastive learning with respect to text-image models. Specifically, we first provide an overview of contrastive learning approaches in text-image models in recent years. Second, we categorize the approaches based on different model structures. Third, we introduce and discuss the latest advances in the techniques used in the process, such as pretext tasks for both images and text, architectural structures, and key trends. Lastly, we discuss recent state-of-the-art applications of self-supervised contrastive learning in text-image models.
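To make the pull-together/push-apart idea concrete, the following is a minimal sketch of the symmetric InfoNCE objective used by CLIP-style text-image models. It is illustrative only: the function name, batch shapes, and the temperature value are assumptions, not details from the paper, and a real implementation would use a deep-learning framework with learned encoders rather than NumPy.

```python
import numpy as np

def info_nce_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Row i of image_emb and row i of text_emb form the positive pair;
    all other rows in the batch act as negatives. Illustrative sketch,
    not the paper's implementation.
    """
    # L2-normalize so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise image-text similarity matrix, scaled by temperature
    logits = image_emb @ text_emb.T / temperature
    labels = np.arange(len(logits))  # positive pair sits on the diagonal

    def cross_entropy(l):
        # Numerically stable log-softmax over each row
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image->text and text->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Minimizing this loss pulls each matched image-text pair together in the shared embedding space while pushing apart the in-batch mismatches, which is the mechanism the abstract describes.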