🤖 AI Summary
To address the low learning efficiency and the loss of joint state-action information typical of reinforcement learning in Parameterized Action Markov Decision Processes (PAMDPs), this paper proposes FLEXplore, a model-based reinforcement learning (MBRL) algorithm. Methodologically, FLEXplore introduces (1) a parameterized-action-conditioned dynamics model that explicitly captures how discrete actions and their continuous parameters influence environment transitions; (2) a variational exploration mechanism that maximizes a lower bound on the mutual information between states and hybrid actions, improving the effectiveness of exploration; and (3) a theoretical guarantee that the regret of the rollout trajectory is reduced, measured under the Wasserstein metric given Lipschitz conditions. The algorithm integrates a modified model-predictive path integral (MPPI) controller, a carefully designed dynamics loss, and reward smoothing. Empirically, FLEXplore achieves significant improvements in both learning efficiency and asymptotic performance across several standard benchmarks, consistently outperforming state-of-the-art baselines.
📝 Abstract
Hybrid action models are widely considered an effective approach to reinforcement learning (RL) modeling. The current mainstream approach is to train agents under Parameterized Action Markov Decision Processes (PAMDPs), which performs well in specific environments. Unfortunately, existing methods either exhibit drastically low learning efficiency in complex PAMDPs or lose crucial information in the conversion between the raw space and the latent space. To enhance the learning efficiency and asymptotic performance of the agent, we propose a model-based RL (MBRL) algorithm, FLEXplore. FLEXplore learns a parameterized-action-conditioned dynamics model and employs a modified Model Predictive Path Integral (MPPI) control. Unlike conventional MBRL algorithms, we carefully design the dynamics loss function and the reward smoothing process to learn a loose yet flexible model. Additionally, we use a variational lower bound to maximize the mutual information between the state and the hybrid action, enhancing the exploration effectiveness of the agent. We theoretically demonstrate that FLEXplore can reduce the regret of the rollout trajectory through the Wasserstein metric under given Lipschitz conditions. Our empirical results on several standard benchmarks show that FLEXplore has outstanding learning efficiency and asymptotic performance compared to other baselines.
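The variational mutual-information objective mentioned above can be illustrated with a generic InfoNCE-style estimator, one common tractable lower bound on mutual information. This is a minimal sketch of the idea, not the paper's actual estimator: the dot-product critic, the batch size, and the Gaussian "states" and "hybrid actions" below are all illustrative assumptions.

```python
import numpy as np

def infonce_lower_bound(scores):
    """InfoNCE lower bound on mutual information.

    scores[i, j] is a critic score f(state_i, action_j); matched
    (positive) pairs sit on the diagonal.  The bound is
    mean_i [ scores[i, i] - log( (1/N) * sum_j exp(scores[i, j]) ) ]
    and can never exceed log(N).
    """
    n = scores.shape[0]
    row_max = scores.max(axis=1, keepdims=True)          # for numerical stability
    log_mean_exp = row_max[:, 0] + np.log(np.exp(scores - row_max).mean(axis=1))
    return float(np.mean(np.diag(scores) - log_mean_exp))

# Illustrative data: "hybrid actions" strongly correlated with "states".
rng = np.random.default_rng(0)
n, d = 256, 8
states = rng.normal(size=(n, d))
actions = states + 0.1 * rng.normal(size=(n, d))

mi_correlated = infonce_lower_bound(states @ actions.T)        # dot-product critic
mi_shuffled = infonce_lower_bound(states @ actions[rng.permutation(n)].T)
print(mi_correlated, mi_shuffled)  # correlated pairs give a much larger bound
```

Maximizing such a bound with respect to the critic (and, in an exploration setting, the policy generating the actions) pushes up the agent's estimate of the state-action mutual information, which is the role the variational lower bound plays in FLEXplore's exploration mechanism.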