Visual Pre-Training on Unlabeled Images using Reinforcement Learning

📅 2025-06-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper introduces a framework that casts self-supervised visual pre-training on large-scale unlabeled image data, such as web-crawled images and video frames, as a reinforcement learning (RL) problem. Methodologically, it models image view transformations (cropping, color augmentation) as actions taken by an agent in a dynamical system, learns visual representations via general value function estimation, and can incorporate weak supervision signals (e.g., image-text pairs) as rewards, relaxing the rigid pairing structure of contrastive learning. Key innovations include policy-agnostic value estimation, enhanced spatial modeling, and explicit state-transition modeling. Experiments on real-world datasets, including EpicKitchens, COCO, and CC12M, demonstrate improved downstream task performance, supporting the effectiveness and generality of the RL paradigm for open-domain visual pre-training.

📝 Abstract
In reinforcement learning (RL), value-based algorithms learn to associate each observation with the states and rewards that are likely to be reached from it. We observe that many self-supervised image pre-training methods bear similarity to this formulation: learning features that associate crops of images with those of nearby views, e.g., by taking a different crop or color augmentation. In this paper, we complete this analogy and explore a method that directly casts pre-training on unlabeled image data like web crawls and video frames as an RL problem. We train a general value function in a dynamical system where an agent transforms an image by changing the view or adding image augmentations. Learning in this way resembles crop-consistency self-supervision, but through the reward function, offers a simple lever to shape feature learning using curated images or weakly labeled captions when they exist. Our experiments demonstrate improved representations when training on unlabeled images in the wild, including video data like EpicKitchens, scene data like COCO, and web-crawl data like CC12M.
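To make the analogy in the abstract concrete, the following is a minimal sketch of TD(0) learning of a linear value function in a toy dynamical system over image views: actions are random crops (view changes) and brightness jitter (augmentations), and a brightness threshold stands in for a weak supervision reward such as a caption match. The encoder (a fixed random projection), the reward, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop(img, size=8):
    """Action: move to a nearby view by taking a random crop."""
    h, w = img.shape
    y, x = rng.integers(0, h - size + 1), rng.integers(0, w - size + 1)
    return img[y:y + size, x:x + size]

def color_jitter(view):
    """Action: apply a brightness augmentation to the current view."""
    return np.clip(view * (0.8 + 0.4 * rng.random()), 0.0, 1.0)

def phi(view, proj):
    """Toy 'encoder': fixed random projection of the flattened view."""
    return proj @ view.ravel()

def weak_reward(view):
    """Stand-in for a weak supervision signal (e.g., a caption match).
    Here we simply reward bright views; purely illustrative."""
    return float(view.mean() > 0.5)

def td_pretrain(images, steps=500, gamma=0.9, lr=0.05, size=8, d_feat=16):
    """TD(0) over view transitions: V(s) = w @ phi(s)."""
    proj = rng.normal(0.0, 1.0 / size, (d_feat, size * size))
    w = np.zeros(d_feat)  # linear value head
    for _ in range(steps):
        img = images[rng.integers(len(images))]
        s = random_crop(img, size)  # current view (state)
        # Next state: either augment the current view or re-crop the image.
        s2 = color_jitter(s) if rng.random() < 0.5 else random_crop(img, size)
        f, f2 = phi(s, proj), phi(s2, proj)
        delta = weak_reward(s2) + gamma * (w @ f2) - (w @ f)  # TD error
        w += lr * delta * f  # semi-gradient value update
    return w, proj
```

In the full method a deep network replaces the linear head and random projection, but the structure is the same: features are shaped by which views are reachable from one another, and the reward is the lever for injecting curated or weakly labeled data.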
Problem

Research questions and friction points this paper is trying to address.

Pre-training visual models using reinforcement learning on unlabeled images
Leveraging RL to learn features from image crops and augmentations
Improving representations for diverse datasets like video and web-crawls
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning for image pre-training
General value function in dynamical systems
Feature learning via reward function shaping