🤖 AI Summary
This work investigates whether vision transformers (ViTs) can be pre-trained on purely algorithmic, non-visual, semantically void sequence data to improve data efficiency and convergence speed on downstream image tasks. We propose a patch-embedding-free pre-training paradigm that uses formal grammars to generate synthetic, non-image algorithmic sequences as input, enabling the model to acquire generic computational priors before any image-based training. To our knowledge, this is the first explicit injection of cross-modal inductive bias into ViTs without relying on real or synthetic visual data. On ImageNet-1k, allocating only 1% of the standard training budget to this programmatic pre-training yields a fine-tuning accuracy gain of over 1.7%, equivalent to augmenting the training set with an additional 28% of real images. The approach consistently improves performance across diverse downstream vision tasks.
📝 Abstract
Transformers show remarkable versatility across domains, suggesting the existence of inductive biases beneficial across modalities. In this work, we explore a new way to instil such generic biases in vision transformers (ViTs) by pretraining on procedurally generated data devoid of visual or semantic content. We generate this data with simple algorithms such as formal grammars, so the results bear no relationship to either natural or synthetic images. We use this procedurally generated data to pretrain ViTs in a warm-up phase that bypasses their visual patch embedding mechanisms, thus encouraging the models to internalise abstract computational priors. When followed by standard image-based training, this warm-up significantly improves data efficiency, convergence speed, and downstream performance. On ImageNet-1k, for example, allocating just 1% of the training budget to procedural data improves final accuracy by over 1.7%. In terms of its effect on performance, 1% procedurally generated data is thus equivalent to 28% of the ImageNet-1k data. These findings suggest a promising path toward data-efficient and domain-agnostic pretraining strategies.
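To make the data-generation idea concrete, here is a minimal sketch of sampling token sequences from a toy context-free grammar. The grammar, symbols, and recursion bound below are illustrative assumptions, not the paper's actual generators; the point is only that such sequences carry structure (from the production rules) while being non-visual and non-semantic. In the warm-up phase described above, sequences like these would be fed through a plain token embedding table, bypassing the ViT's patch embedding.

```python
import random

# Hypothetical toy grammar; the paper's actual grammars and training
# objective may differ. Productions are ordered shortest-first so the
# depth cutoff below always terminates.
GRAMMAR = {
    "S": [["A", "B"], ["B", "A"], ["S", "S"]],
    "A": [["a"], ["a", "A"]],
    "B": [["b"], ["b", "B"]],
}

def expand(symbol, rng, max_depth=8):
    """Recursively expand a non-terminal into a list of terminal tokens."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal symbol
    if max_depth <= 0:
        # fall back to the first (shortest) production to bound recursion
        production = GRAMMAR[symbol][0]
    else:
        production = rng.choice(GRAMMAR[symbol])
    tokens = []
    for s in production:
        tokens.extend(expand(s, rng, max_depth - 1))
    return tokens

def generate_sequences(n, seed=0):
    """Sample n token sequences from the grammar, reproducibly."""
    rng = random.Random(seed)
    return [expand("S", rng) for _ in range(n)]

seqs = generate_sequences(4)
```

Each sampled sequence contains only the terminals `a` and `b`, yet its internal structure reflects the grammar's rules, which is what the warm-up phase asks the transformer to model.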