MAGE: Meta-Reinforcement Learning for Language Agents toward Strategic Exploration and Exploitation

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the suboptimal performance of large language model (LLM) agents in non-stationary multi-agent environments, where they often struggle due to insufficient long-term strategic exploration and exploitation. To overcome this limitation, the authors propose MAGE, a framework that brings meta-reinforcement learning to LLM agents in multi-agent settings. MAGE optimizes agent policies by integrating historical experiences and reflective reasoning across multiple interaction episodes, using the final-episode reward as the optimization objective. The framework further enhances diversity, learning stability, and strategic adaptability through population-based training, agent-specific advantage normalization, and contextual ensemble mechanisms. Experimental results show that MAGE significantly outperforms existing baselines on exploration–exploitation tasks and generalizes strongly to unseen opponents.
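The multi-episode regime described above can be sketched as a toy loop: histories and reflections accumulate in the agent's context across episodes, and only the final episode's reward is kept as the training signal, which rewards early exploration that pays off later. `ToyAgent`, `play_episode`, and `run_meta_episode` are invented names for illustration, not MAGE's actual code.

```python
# Toy sketch of the multi-episode regime: the agent searches for a
# hidden target value, reflects on each episode, and exploits any
# answer recorded in its context. All names are illustrative.

class ToyAgent:
    """Explores candidate values, then exploits the target once found."""

    def act(self, context):
        # Exploit: if an earlier reflection recorded the answer, reuse it.
        for _, reflection in context:
            if reflection is not None:
                return reflection
        # Explore: otherwise try the next untried value systematically.
        return len(context)

    def reflect(self, history):
        # Reflection step: remember the guess only if it was rewarded.
        guess, reward = history
        return guess if reward == 1.0 else None


def play_episode(agent, target, context):
    guess = agent.act(context)
    reward = 1.0 if guess == target else 0.0
    return (guess, reward), reward


def run_meta_episode(agent, target, num_episodes=5):
    context, final_reward = [], 0.0
    for _ in range(num_episodes):
        history, reward = play_episode(agent, target, context)
        context.append((history, agent.reflect(history)))
        final_reward = reward  # only the last episode's reward survives
    return final_reward
```

With `target=3` and five episodes, the agent explores 0, 1, 2, reaches 3, records it in a reflection, and exploits it in the final episode, so `run_meta_episode(ToyAgent(), 3)` returns 1.0; with only three episodes it never finds the target and the final reward is 0.0.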

📝 Abstract
Large Language Model (LLM) agents have demonstrated remarkable proficiency in learned tasks, yet they often struggle to adapt to non-stationary environments with feedback. While In-Context Learning and external memory offer some flexibility, they fail to internalize the adaptive ability required for long-term improvement. Meta-Reinforcement Learning (meta-RL) provides an alternative by embedding the learning process directly within the model. However, existing meta-RL approaches for LLMs focus primarily on exploration in single-agent settings, neglecting the strategic exploitation necessary for multi-agent environments. We propose MAGE, a meta-RL framework that empowers LLM agents for strategic exploration and exploitation. MAGE utilizes a multi-episode training regime where interaction histories and reflections are integrated into the context window. By using the final episode reward as the objective, MAGE incentivizes the agent to refine its strategy based on past experiences. We further combine population-based training with an agent-specific advantage normalization technique to enrich agent diversity and ensure stable learning. Experimental results show that MAGE outperforms existing baselines in both exploration and exploitation tasks. Furthermore, MAGE exhibits strong generalization to unseen opponents, suggesting it has internalized the ability for strategic exploration and exploitation. Code is available at https://github.com/Lu-Yang666/MAGE.
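The agent-specific advantage normalization mentioned in the abstract can be illustrated with a minimal sketch: in a population, each agent's advantages are standardized against that agent's own reward statistics, so agents whose rewards live on different scales contribute comparably to the shared update. The grouping and formula below are an illustrative assumption, not the paper's exact implementation.

```python
import statistics

# Hedged sketch of agent-specific advantage normalization: each agent's
# episode rewards are standardized using that agent's own mean and
# standard deviation, preventing one agent's reward scale from
# dominating the policy update. Illustrative only.

def agent_specific_advantages(rewards_by_agent, eps=1e-8):
    """rewards_by_agent: {agent_id: [episode rewards]} -> per-agent advantages."""
    advantages = {}
    for agent_id, rewards in rewards_by_agent.items():
        mean = statistics.fmean(rewards)
        std = statistics.pstdev(rewards)  # population std over the batch
        advantages[agent_id] = [(r - mean) / (std + eps) for r in rewards]
    return advantages
```

For example, `agent_specific_advantages({"a": [1.0, 3.0], "b": [10.0, 30.0]})` maps both agents to advantages near [-1.0, 1.0] despite their tenfold difference in reward scale; in a policy-gradient update these values would then weight the log-probability gradients of each agent's actions.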
Problem

Research questions and friction points this paper is trying to address.

Large Language Model
Meta-Reinforcement Learning
Strategic Exploration
Strategic Exploitation
Multi-agent Environment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-Reinforcement Learning
Language Agents
Strategic Exploration and Exploitation
Population-Based Training
Advantage Normalization