ASPECT: Analogical Semantic Policy Execution via Language Conditioned Transfer

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Reinforcement learning agents often struggle to generalize to semantically novel yet structurally similar tasks, particularly when confronted with unseen compositional challenges. This work proposes a large language model (LLM)-based zero-shot policy transfer method that leverages natural language conditioning to dynamically remap the semantic descriptions of current observations at test time, aligning them with those of the source task. This alignment drives a text-conditioned variational autoencoder (VAE) to generate imagined states compatible with the original policy, enabling its reuse without retraining. By eschewing fixed categorical taxonomies and instead employing the LLM as a semantic operator for analogical reasoning, the approach supports open-domain task transfer. Experiments demonstrate that the method significantly outperforms existing approaches reliant on predefined mappings across a range of genuinely novel analogical tasks.
📝 Abstract
Reinforcement Learning (RL) agents often struggle to generalize knowledge to new tasks, even those structurally similar to ones they have mastered. Although recent approaches have attempted to mitigate this issue via zero-shot transfer, they are often constrained by predefined, discrete class systems, limiting their adaptability to novel or compositional task variations. We propose a significantly more generalized approach, replacing discrete latent variables with natural language conditioning via a text-conditioned Variational Autoencoder (VAE). Our core innovation utilizes a Large Language Model (LLM) as a dynamic "semantic operator" at test time. Rather than relying on rigid rules, our agent queries the LLM to semantically remap the description of the current observation to align with the source task. This source-aligned caption conditions the VAE to generate an imagined state compatible with the agent's original training, enabling direct policy reuse. By harnessing the flexible reasoning capabilities of LLMs, our approach achieves zero-shot transfer across a broad spectrum of complex and truly novel analogous tasks, moving beyond the limitations of fixed category mappings. Code and videos are available at https://anonymous.4open.science/r/ASPECT-85C3/.
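The test-time pipeline the abstract describes (caption the observation, let the LLM remap it to source-task vocabulary, decode an imagined state with the VAE, act with the frozen policy) can be sketched as below. This is an illustrative reconstruction, not the authors' code: all function names are hypothetical, and the LLM and VAE are replaced with toy stand-ins.

```python
# Hedged sketch of ASPECT-style zero-shot policy transfer.
# llm_remap, vae_decode, and source_policy are illustrative stand-ins,
# not the paper's actual API.

def llm_remap(target_caption: str, analogy: dict) -> str:
    """Stand-in for the LLM 'semantic operator': rewrite the target-task
    caption so its entities use the source task's vocabulary."""
    for target_word, source_word in analogy.items():
        target_caption = target_caption.replace(target_word, source_word)
    return target_caption

def vae_decode(caption: str) -> list:
    """Stand-in for the text-conditioned VAE decoder: map a source-aligned
    caption to an 'imagined' state vector the original policy understands."""
    return [float(ord(c) % 7) for c in caption[:4]]  # toy embedding

def source_policy(state: list) -> str:
    """Frozen policy trained on the source task, reused without retraining."""
    return "pick" if sum(state) > 10 else "move"

# The target observation mentions novel objects; the (assumed) LLM-produced
# analogy maps them back onto source-task semantics.
analogy = {"wrench": "key", "toolbox": "door"}
obs_caption = "the wrench is next to the toolbox"
aligned = llm_remap(obs_caption, analogy)    # source-aligned caption
action = source_policy(vae_decode(aligned))  # direct policy reuse
```

The key design point is that no retraining happens at test time: only the caption is rewritten, and the VAE bridges the gap between the rewritten text and the state space the policy was trained on.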
Problem

Research questions and friction points this paper is trying to address.

zero-shot transfer
task generalization
analogical reasoning
reinforcement learning
compositional tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

language-conditioned reinforcement learning
zero-shot transfer
large language models
variational autoencoder
analogical reasoning