🤖 AI Summary
This work addresses the performance degradation of standard Proximal Policy Optimization (PPO) in real-world environments where persistent sensor failures induce observation distribution shift and partial observability. To mitigate this, the authors propose embedding time-series architectures, including Transformers, state space models (SSMs), and recurrent neural networks (RNNs), into the PPO policy network to infer missing information from historical observations, enabling end-to-end robust control. The study presents the first systematic integration of such temporal models, particularly Transformers, into PPO for handling sensor failures, and derives a theoretical upper bound on reward degradation that highlights the critical roles of policy smoothness and failure persistence. On MuJoCo benchmarks with high-rate sensor dropout, the Transformer-based policy significantly outperforms MLP, RNN, and SSM baselines, maintaining high returns even under severe sensor loss.
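The "temporally persistent sensor failures" described above can be modeled as a two-state Markov process per sensor: a working sensor fails with some probability each step, and a failed sensor recovers with some (small) probability, producing long correlated outages rather than i.i.d. dropout. A minimal sketch of such an observation mask is below; the parameter names and the zero-fill convention are illustrative assumptions, not necessarily the exact process used by the authors.

```python
import numpy as np

class PersistentSensorDropout:
    """Mask observation dimensions with a two-state Markov failure process.

    Each sensor is either working or failed; at every step a working
    sensor fails with probability p_fail, and a failed sensor recovers
    with probability p_recover. A small p_recover yields long, persistent
    outages, matching temporally persistent (rather than i.i.d.) failures.
    """

    def __init__(self, obs_dim, p_fail=0.05, p_recover=0.2, seed=0):
        self.rng = np.random.default_rng(seed)
        self.p_fail = p_fail
        self.p_recover = p_recover
        self.failed = np.zeros(obs_dim, dtype=bool)

    def step(self, obs):
        u = self.rng.random(obs.shape[0])
        # Failed sensors stay failed unless they recover;
        # working sensors fail with probability p_fail.
        self.failed = np.where(self.failed, u >= self.p_recover, u < self.p_fail)
        masked = np.where(self.failed, 0.0, obs)  # failed sensors read zero
        return masked, self.failed.copy()
```

In practice this would wrap the environment's observation before it reaches the policy, so the sequence model must reconstruct the missing state from history.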
📝 Abstract
Real-world reinforcement learning systems must operate under distributional drift in their observation streams, yet most policy architectures implicitly assume fully observed and noise-free states. We study the robustness of Proximal Policy Optimization (PPO) under temporally persistent sensor failures that induce partial observability and representation shift. To respond to this drift, we augment PPO with temporal sequence models, including Transformers and State Space Models (SSMs), enabling policies to infer missing information from history and maintain performance. Under a stochastic sensor failure process, we prove a high-probability bound on infinite-horizon reward degradation that quantifies how robustness depends on policy smoothness and failure persistence. Empirically, on MuJoCo continuous-control benchmarks with severe sensor dropout, we show that Transformer-based sequence policies substantially outperform MLP, RNN, and SSM baselines, maintaining high returns even when large fractions of sensors are unavailable. These results demonstrate that temporal sequence reasoning provides a principled and practical mechanism for reliable operation under observation drift caused by sensor unreliability.
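The core architectural idea, a policy that conditions on a window of past (possibly masked) observations via causal self-attention, can be sketched with plain NumPy. The single-head attention and the weight matrices below are placeholders for learned PPO policy parameters; this illustrates the mechanism only and is not the authors' implementation.

```python
import numpy as np

def causal_attention_policy(history, Wq, Wk, Wv, Wa):
    """Map a history of (possibly masked) observations to an action mean
    via single-head causal self-attention, reading out the last timestep.

    history: (T, d_obs) array of past observations.
    Wq, Wk, Wv: (d_obs, d) projection matrices; Wa: (d, d_act) readout.
    """
    Q, K, V = history @ Wq, history @ Wk, history @ Wv
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    # Causal mask: each timestep attends only to itself and the past.
    causal = np.tril(np.ones((T, T), dtype=bool))
    scores = np.where(causal, scores, -np.inf)
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    z = w @ V            # attended features per timestep
    return z[-1] @ Wa    # action mean from the latest attended state
```

Because attention pools over the whole window, the policy can weight the most recent uncorrupted readings of a sensor even when its current value is missing, which is the intuition behind the Transformer's advantage over purely recurrent baselines.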