🤖 AI Summary
To address the inefficiency and lack of interpretability arising from emergent communication protocols in multi-agent reinforcement learning (MARL), this work introduces human-defined natural language as a structured prior into embodied MARL frameworks. Methodologically, we propose an end-to-end architecture integrating a language encoder-decoder, a joint action-language policy network, and a language-supervised representation alignment mechanism—enabling agents to generate and comprehend linguistically grounded observations, thereby co-optimizing explicit communication and representation learning. To our knowledge, this is the first systematic integration of natural language in MARL both as an interpretable communication medium and as a representational guidance signal. Experiments demonstrate that our approach significantly outperforms emergent-communication baselines on collaborative tasks; internal representations exhibit a 32% increase in information content, cross-partner generalization accuracy improves by 27%, and success rate in following human instructions rises by 41%.
📝 Abstract
Communication is a fundamental aspect of coordinated behavior in multi-agent reinforcement learning. Yet, most prior works in this field have focused on emergent communication protocols developed from scratch, often resulting in inefficient or non-interpretable systems. Inspired by the role of language in natural intelligence, we investigate how grounding agents in a human-defined language can improve learning and coordination of multiple embodied agents. We propose a framework in which agents are trained not only to act but also to produce and interpret natural language descriptions of their observations. This language-augmented learning serves a dual role: enabling explicit communication between agents and guiding representation learning. We demonstrate that agents trained with our method outperform traditional emergent communication baselines across various tasks. Our analysis reveals that language grounding leads to more informative internal representations, better generalization to new partners, and improved capability for human-agent interaction. These findings demonstrate the effectiveness of integrating structured language into multi-agent learning and open avenues for more interpretable and capable multi-agent systems.