🤖 AI Summary
Traditional reinforcement learning (RL) agents passively adapt to environmental dynamics, which limits performance in non-stationary settings. Method: We propose a paradigm in which agents actively modify the environment's dynamics model, formalized as a Multi-layer Configurable Time-Varying Markov Decision Process (MCTVMDP). In this framework, a high-level controller selects "model-changing actions" that reconfigure the transition function of an underlying non-stationary MDP; a low-level policy then executes primitive actions within the modified environment. Both layers are jointly optimized to maximize the expected discounted return. Contribution/Results: This work introduces learnable model-changing actions, enabling proactive environmental restructuring and adaptive control of the dynamics. Empirical evaluation demonstrates substantial improvements in long-term cumulative reward under non-stationarity, supporting the claim that actively modifying the environment model yields real gains in policy performance.
📝 Abstract
Reinforcement learning usually assumes a given, and sometimes even fixed, environment in which an agent seeks an optimal policy to maximize its long-term discounted reward. In contrast, we consider agents that are not limited to passive adaptation: they also have model-changing actions that actively modify the model of the world dynamics itself. Reconfiguring the underlying transition processes can potentially increase the agent's rewards. Motivated by this setting, we introduce the multi-layer configurable time-varying Markov decision process (MCTVMDP). In an MCTVMDP, the lower-level MDP has a non-stationary transition function that is configurable through upper-level model-changing actions. The agent's objective consists of two parts: optimize the configuration policy in the upper-level MDP and optimize the primitive-action policy in the lower-level MDP, so as to jointly maximize the expected long-term reward.
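The two-level structure described above — an upper level that issues model-changing actions reconfiguring the transition function, and a lower level that executes primitive actions for reward — can be sketched as follows. This is a minimal illustrative toy, not the paper's method: the chain environment, the `slip` parameter, the tabular Q-learning lower level, and the greedy search over configurations are all assumptions made for the sketch.

```python
import numpy as np

class ConfigurableChainMDP:
    """A 5-state chain whose transition 'slip' probability is configurable
    by an upper-level model-changing action (illustrative assumption)."""
    def __init__(self, n_states=5, rng=None):
        self.n = n_states
        self.slip = 0.3                     # default transition noise
        self.state = 0
        self.rng = rng or np.random.default_rng(0)

    def configure(self, slip):
        # Upper-level model-changing action: reshape the transition function.
        self.slip = float(np.clip(slip, 0.0, 0.9))

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):                 # action in {0: left, 1: right}
        # With probability `slip`, the executed move is inverted.
        move = action if self.rng.random() > self.slip else 1 - action
        self.state = int(np.clip(self.state + (1 if move else -1), 0, self.n - 1))
        reward = 1.0 if self.state == self.n - 1 else 0.0
        return self.state, reward

def run_episode(env, q, eps=0.1, alpha=0.5, gamma=0.95, horizon=20):
    """Lower level: tabular epsilon-greedy Q-learning over primitive actions."""
    s, total = env.reset(), 0.0
    for _ in range(horizon):
        a = int(env.rng.integers(2)) if env.rng.random() < eps else int(np.argmax(q[s]))
        s2, r = env.step(a)
        q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
        s, total = s2, total + r
    return total

# Joint optimization sketch: the upper level keeps the configuration whose
# episodes yield the highest average return under the learned lower-level policy.
rng = np.random.default_rng(0)
env = ConfigurableChainMDP(rng=rng)
best_slip, best_ret = None, -np.inf
for slip in (0.0, 0.2, 0.4):                # candidate model-changing actions
    env.configure(slip)
    q = np.zeros((env.n, 2))                # fresh lower-level policy per config
    ret = np.mean([run_episode(env, q) for _ in range(50)])
    if ret > best_ret:
        best_slip, best_ret = slip, ret
print("best configuration (slip):", best_slip)
```

The sketch replaces the paper's joint optimization with a brute-force scan over three candidate configurations; the point is only to show the layering: `configure` changes the dynamics model, while `run_episode` optimizes behavior within the configured MDP.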