🤖 AI Summary
Reinforcement learning agents struggle to generalize to novel tasks and environments without parameter updates, primarily because their policies and representations overfit to task-specific details of the training environment. To address this, we propose CORAL, the first framework to formulate in-context reinforcement learning (ICRL) as an emergent two-agent communication problem: an information agent, pretrained as a world model, generates compact, causally influential context messages; a control agent then leverages these messages for zero-shot adaptation to unseen environments. CORAL introduces a causal influence loss to shape the communication protocol, explicitly decoupling representation learning from control, and adopts a two-stage pretrain-then-deploy pipeline in which the pretrained information agent is frozen at deployment. Experiments demonstrate that CORAL achieves zero-shot transfer in previously unseen, sparse-reward environments while substantially improving sample efficiency. These results validate the efficacy and transferability of causal communicative representations for generalization in RL.
📝 Abstract
Reinforcement learning (RL) agents often struggle to generalize to new tasks and contexts without updating their parameters, mainly because their learned representations and policies overfit to the specifics of their training environments. To improve agents' in-context RL (ICRL) capability, this work formulates ICRL as a two-agent emergent communication problem and introduces CORAL (Communicative Representation for Adaptive RL), a framework that learns a transferable communicative context by decoupling latent representation learning from control. In CORAL, an Information Agent (IA) is pretrained as a world model on a diverse distribution of tasks. Its objective is not to maximize task reward but to model the environment's dynamics and distill its understanding into concise messages. The emergent communication protocol is shaped by a novel Causal Influence Loss, which measures the effect the message has on the next action. During deployment, the pretrained IA serves as a fixed contextualizer for a new Control Agent (CA), which learns to solve tasks by interpreting the provided communicative context. Our experiments demonstrate that this approach yields significant gains in sample efficiency and enables the CA, aided by the pretrained IA, to adapt zero-shot to entirely unseen sparse-reward environments, validating the efficacy of learning a transferable communicative representation.
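One common way to measure "the effect the message has on the next action" is the divergence between the policy conditioned on the message and the same policy with the message removed or marginalized out. Below is a minimal, hypothetical sketch of such a causal influence term in NumPy, assuming discrete action logits; the paper's exact formulation may differ, and the function name `causal_influence` is illustrative, not from the source:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def causal_influence(logits_with_msg, logits_no_msg, eps=1e-8):
    """Average KL( pi(a|s,m) || pi(a|s) ) over a batch of states.

    This is one plausible instantiation of a causal influence measure:
    it is zero when the message leaves the action distribution unchanged,
    and grows as the message shifts the policy's behavior. Training would
    maximize this quantity (e.g., add its negative to the loss) so that
    emergent messages are causally influential on the next action.
    """
    p = softmax(logits_with_msg)   # policy given state and message
    q = softmax(logits_no_msg)     # policy given state only
    kl = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)
    return kl.mean()
```

Using KL divergence here mirrors prior work on social influence as intrinsic motivation; a mutual-information estimator between messages and actions would be an alternative design choice.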