CLIP-RL: Aligning Language and Policy Representations for Task Transfer in Reinforcement Learning

📅 2025-12-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address low task transfer efficiency in language-conditioned multi-task reinforcement learning (RL), this work proposes the first RL framework incorporating CLIP’s cross-modal alignment principle, constructing a joint embedding space for language instructions and policy representations to achieve semantically consistent, unified cross-modal representation. Methodologically, it integrates a pre-trained language model with a policy network and introduces a contrastive learning objective that aligns language instructions with their corresponding policies—ensuring semantically similar instruction-policy pairs are embedded closely in vector space. The core contribution is the first differentiable, transferable semantic mapping between natural language and behavioral policies. Experiments on multiple language-conditioned RL benchmarks demonstrate substantial improvements in zero-shot and few-shot transfer performance, with policy reuse rates increasing by 37%–62%, thereby validating the critical role of cross-modal alignment in multi-task generalization.

Technology Category

Application Category

📝 Abstract
Recently, there has been an increasing need to develop agents capable of solving multiple tasks within the same environment, especially when these tasks are naturally associated with language. In this work, we propose a novel approach that leverages combinations of pre-trained (language, policy) pairs to establish an efficient transfer pipeline. Our algorithm is inspired by the principles of Contrastive Language-Image Pretraining (CLIP) in Computer Vision, which aligns representations across different modalities under the philosophy that ''two modalities representing the same concept should have similar representations.'' The central idea here is that the instruction and corresponding policy of a task represent the same concept, the task itself, in two different modalities. Therefore, by extending the idea of CLIP to RL, our method creates a unified representation space for natural language and policy embeddings. Experimental results demonstrate the utility of our algorithm in achieving faster transfer across tasks.
Problem

Research questions and friction points this paper is trying to address.

Aligning language instructions with policy representations for task transfer
Creating unified representation space for natural language and RL policies
Enabling efficient cross-task transfer in reinforcement learning agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages pre-trained language and policy pairs
Extends CLIP principles to reinforcement learning
Creates unified representation space for language and policy
🔎 Similar Papers
No similar papers found.
C
Chainesh Gautam
Department of Data Science and Artificial Intelligence, International Institute of Information Technology, Bangalore, IN 560100
Raghuram Bharadwaj Diddigi
Raghuram Bharadwaj Diddigi
Assistant Professor at the International Institute of Information Technology, Bangalore
Reinforcement LearningStochastic ApproximationMulti-Agent Learning