Replacing Labeled Real-image Datasets with Auto-generated Contours

📅 2022-06-01
🏛️ Computer Vision and Pattern Recognition
📈 Citations: 32 (2 influential)
🤖 AI Summary
Pretraining vision transformers (ViTs) typically relies on large-scale real-image datasets, raising concerns about data privacy, copyright, environmental cost, and annotation effort. Method: The paper develops Formula-Driven Supervised Learning (FDSL), a framework that pretrains ViTs exclusively on synthetically generated contour images, with no real images, human annotation, or self-supervision. Contours are generated procedurally from mathematical formulas, giving controllable complexity with no labeling cost and no privacy or copyright risk. Contribution/Results: The authors show that contour structure alone carries enough information for effective representation learning, and that moderately increasing pretraining task difficulty improves transfer performance. A ViT-Base pretrained via FDSL reaches 82.7% top-1 accuracy when fine-tuned on ImageNet-1k, surpassing the ImageNet-21k baseline (81.8%) under matched conditions. Purely synthetic contour data can thus match or exceed large-scale real-image pretraining, opening a pathway toward greener and more trustworthy vision models.
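As an illustration of what "generated procedurally from mathematical formulas" means here, the sketch below renders one closed contour from a small parameter tuple. It is a hypothetical minimal example (sinusoidal radial perturbations of a circle), not the paper's released fractal/contour generators; `render_contour` and its parameterization are illustrative.

```python
# Minimal sketch of formula-driven contour generation (illustrative only).
# A closed contour is defined by a radius function whose sinusoidal terms
# are the "formula" parameters; no real image is involved at any point.
import numpy as np
from PIL import Image, ImageDraw

def render_contour(freqs, amps, phases, size=224, base_radius=0.3, width=2):
    """Rasterize r(theta) = R * (1 + sum_k a_k * sin(f_k * theta + p_k))."""
    theta = np.linspace(0.0, 2.0 * np.pi, 1024)
    r = np.ones_like(theta)
    for f, a, p in zip(freqs, amps, phases):
        r += a * np.sin(f * theta + p)
    r *= base_radius * size
    xs = size / 2.0 + r * np.cos(theta)  # center the curve in the canvas
    ys = size / 2.0 + r * np.sin(theta)
    img = Image.new("L", (size, size), color=0)
    ImageDraw.Draw(img).line(list(zip(xs, ys)), fill=255, width=width)
    return img

# One parameter tuple can serve as one pre-training class.
rng = np.random.default_rng(0)
img = render_contour(freqs=rng.integers(1, 8, size=3),
                     amps=rng.uniform(0.02, 0.15, size=3),
                     phases=rng.uniform(0.0, 2.0 * np.pi, size=3))
img.save("contour_example.png")
```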
📝 Abstract
In the present work, we show that the performance of formula-driven supervised learning (FDSL) can match or even exceed that of ImageNet-21k, without using real images, human annotation, or self-supervision during the pre-training of vision transformers (ViTs). For example, ViT-Base pre-trained on ImageNet-21k reaches 81.8% top-1 accuracy when fine-tuned on ImageNet-1k, while FDSL reaches 82.7% top-1 accuracy when pre-trained under the same conditions (number of images, hyperparameters, and number of epochs). Images generated by formulas avoid the privacy/copyright issues, labeling costs and errors, and biases that real images suffer from, and thus have tremendous potential for pre-training general models. To understand why synthetic images work, we tested two hypotheses: (i) object contours are what matter in FDSL datasets, and (ii) increasing the number of parameters used to create labels improves FDSL pre-training. To test the former, we constructed a dataset consisting of simple combinations of object contours and found that it can match the performance of fractals. For the latter, we found that increasing the difficulty of the pre-training task generally leads to better fine-tuning accuracy.
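Hypothesis (ii) turns on how labels are created: in FDSL the class label is simply the parameter tuple that generated the image, so enlarging the parameter space enlarges, and hardens, the classification task. Below is a minimal sketch of that labeling scheme, assuming the sinusoidal contour parameterization from the previous snippet; `make_fdsl_labels` is an illustrative name, not the paper's code.

```python
# Hedged sketch of parameter-derived labels: each class is a frozen
# (freqs, amps) tuple, and instances within a class vary only in nuisances.
import numpy as np

def make_fdsl_labels(num_classes=1000, instances_per_class=10, seed=0):
    rng = np.random.default_rng(seed)
    dataset = []
    for label in range(num_classes):
        # The label IS the generating parameter tuple -- no human annotation.
        freqs = rng.integers(1, 8, size=3)
        amps = rng.uniform(0.02, 0.15, size=3)
        for _ in range(instances_per_class):
            # Intra-class variation: jitter phases (in practice also rotation/scale).
            phases = rng.uniform(0.0, 2.0 * np.pi, size=3)
            dataset.append({"freqs": freqs, "amps": amps,
                            "phases": phases, "label": label})
    return dataset

# Enlarging the parameter grid (more frequencies, finer amplitude bins) yields
# more classes and a harder task, the knob hypothesis (ii) links to transfer.
params = make_fdsl_labels(num_classes=4, instances_per_class=2)
print(params[0]["label"], params[0]["freqs"])
```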
Problem

Research questions and friction points this paper is trying to address.

Pre-train vision transformers using synthetic images without real data
Address privacy, copyright, and bias issues in image datasets
Investigate key factors for effective formula-driven supervised learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using formula-generated synthetic images for supervised pre-training (see the training sketch after this list)
Eliminating the need for real images and human supervision
Matching or exceeding ImageNet-21k accuracy under the same pre-training conditions (images, hyperparameters, epochs)
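For context, FDSL pre-training itself is ordinary supervised classification over the synthetic classes. Here is a hedged sketch of one training step, assuming PyTorch and timm; the ViT-Base model name matches the paper's comparison, but the loop and the 21,000-class count are illustrative, not the authors' released code.

```python
# Sketch of one supervised FDSL pre-training step (illustrative).
import torch
import timm

# One logit per synthetic class (e.g., ~21k classes to mirror ImageNet-21k).
model = timm.create_model("vit_base_patch16_224", num_classes=21000)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)
criterion = torch.nn.CrossEntropyLoss()

def pretrain_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One cross-entropy step on rendered contours and their formula labels."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for rendered contour batches.
loss = pretrain_step(torch.randn(2, 3, 224, 224), torch.randint(0, 21000, (2,)))
```

After this stage, the pretrained backbone is fine-tuned on ImageNet-1k with standard supervised training, which is where the 82.7% figure above is measured.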
👥 Authors
Hirokatsu Kataoka
AIST / University of Oxford
Computer Vision, Action Recognition, Action Prediction, Visual Pre-training, FDSL
Ryo Hayamizu
National Institute of Advanced Industrial Science and Technology (AIST)
Ryosuke Yamada
National Institute of Advanced Industrial Science and Technology (AIST)
Kodai Nakashima
NEC, Biometrics Research Laboratory
Sora Takashima
National Institute of Advanced Industrial Science and Technology (AIST) and Institute of Science Tokyo
Xinyu Zhang
National Institute of Advanced Industrial Science and Technology (AIST) and Institute of Science Tokyo
E. J. Martinez-Noriega
National Institute of Advanced Industrial Science and Technology (AIST) and Institute of Science Tokyo
Nakamasa Inoue
National Institute of Advanced Industrial Science and Technology (AIST) and Institute of Science Tokyo
Rio Yokota
Professor, Institute of Science Tokyo
High Performance Computing, Large-Scale Deep Learning, Hierarchical Low-Rank Matrices, GPU Computing