🤖 AI Summary
CLIP models typically require massive, diverse training datasets, which makes it hard to build high-quality, domain-specific models efficiently. Method: This paper proposes a knowledge graph–enhanced intelligent web crawling framework that automatically builds small-scale, high-precision domain-specific datasets. It introduces a knowledge graph–driven semantic crawling paradigm that integrates knowledge reasoning, semantic search optimization, and cross-modal alignment. Contribution/Results: The framework is used to construct EntityNet, a domain-specific dataset of 33M images paired with 46M texts, and to train an expert-level CLIP model for living organisms from only 10M images. Compared to large-scale baseline models, the approach substantially reduces data and compute requirements while matching or exceeding their performance on fine-grained domains such as living organisms, and it cuts training time severalfold, pointing toward controllable, interpretable, and cost-effective domain-adapted CLIP modeling.
📝 Abstract
Training high-quality CLIP models typically requires enormous datasets, which limits the development of domain-specific models -- especially in areas that even the largest CLIP models do not cover well -- and drives up training costs. This poses challenges for scientific research that needs fine-grained control over the training procedure of CLIP models. In this work, we show that by employing smart web search strategies enhanced with knowledge graphs, a robust CLIP model can be trained from scratch with considerably less data. Specifically, we demonstrate that an expert foundation model for living organisms can be built using just 10M images. Moreover, we introduce EntityNet, a dataset comprising 33M images paired with 46M text descriptions, which enables the training of a generic CLIP model in significantly reduced time.
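The "smart web search strategies enhanced with knowledge graphs" mentioned above could plausibly look like expanding a seed entity through its graph neighborhood and turning the resulting terms into targeted search queries. The toy graph, relation names, and query templates below are my own assumptions for illustration, not the paper's actual pipeline:

```python
# Illustrative sketch (not the paper's implementation): expand a knowledge-graph
# entity into related surface terms, then build web-search queries that a crawler
# could use to collect domain-specific image-text pairs.
from collections import deque

# Toy knowledge graph: entity -> list of (relation, neighbor) edges.
# Real systems would query a large KG such as Wikidata instead.
TOY_KG = {
    "Felis catus": [("common_name", "domestic cat"), ("parent_taxon", "Felis")],
    "Felis": [("parent_taxon", "Felidae")],
    "Felidae": [("common_name", "cat family")],
}

def expand_entity(kg, root, max_hops=2):
    """Breadth-first expansion: collect the root entity plus all neighbors
    reachable within max_hops, giving a set of related surface terms."""
    seen, queue = {root}, deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue  # do not expand past the hop limit
        for _, neighbor in kg.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return sorted(seen)

def build_queries(terms, templates=("{} photo", "{} close-up")):
    """Turn expanded terms into search queries for image-text crawling."""
    return [t.format(term) for term in terms for t in templates]

terms = expand_entity(TOY_KG, "Felis catus")
queries = build_queries(terms)
```

In this sketch, a two-hop expansion from "Felis catus" pulls in common names and parent taxa, so the crawler searches for semantically related variants of the entity rather than a single literal string; that is one way a knowledge graph can steer a crawl toward a small but high-precision dataset.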