🤖 AI Summary
This work addresses the challenge of applying reinforcement learning to air traffic control, where directly generating operationally compliant compound instructions is difficult and high-dimensional action spaces hinder training. To overcome this, the authors propose an online action-stacking mechanism that, at inference time, dynamically composes a small set of five basic discrete actions learned during training into practical compound commands, achieving performance comparable to a policy trained with a 37-dimensional action space. Built on the Proximal Policy Optimization (PPO) algorithm and trained on the BluebirdDT digital twin platform, the approach incorporates an action-damping penalty to regulate instruction frequency. Experimental results demonstrate that the method substantially reduces the number of issued commands in lateral navigation tasks while still resolving two-aircraft conflicts and managing climbs and descents to target flight levels, thereby balancing training efficiency with operational realism.
📝 Abstract
We introduce online action-stacking, an inference-time wrapper for reinforcement learning policies that produces realistic air traffic control commands while allowing training on a much smaller discrete action space. Policies are trained with simple incremental heading or level adjustments, together with an action-damping penalty that reduces instruction frequency and leads agents to issue commands in short bursts. At inference, online action-stacking compiles these bursts of primitive actions into domain-appropriate compound clearances. Using Proximal Policy Optimisation and the BluebirdDT digital twin platform, we train agents to navigate aircraft along lateral routes, manage climb and descent to target flight levels, and perform two-aircraft collision avoidance under a minimum separation constraint. In our lateral navigation experiments, action-stacking greatly reduces the number of issued instructions relative to a damped baseline and achieves performance comparable to a policy trained with a 37-dimensional action space, despite operating with only five actions. These results indicate that online action-stacking helps bridge a key gap between standard reinforcement learning formulations and operational ATC requirements, and provides a simple mechanism for scaling to more complex control scenarios.
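To make the core mechanism concrete, the following is a minimal sketch of how bursts of primitive actions could be compiled into compound clearances at inference time. The primitive action set, step sizes, and grouping rule here are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of online action-stacking at inference time.
# Action names and step sizes are assumptions, not the paper's spec.
from itertools import groupby

# Five basic discrete actions (assumed): no-op plus small relative
# heading and flight-level adjustments.
PRIMITIVES = {
    0: ("noop", 0),
    1: ("heading", +5),    # degrees
    2: ("heading", -5),
    3: ("level", +1000),   # feet
    4: ("level", -1000),
}

def stack_actions(action_burst):
    """Compile a burst of primitive actions into compound commands.

    Consecutive primitives on the same control axis are summed into a
    single relative instruction, so e.g. four +5-degree heading steps
    become one 'turn right 20 degrees' clearance.
    """
    decoded = [PRIMITIVES[a] for a in action_burst]
    commands = []
    for axis, group in groupby(decoded, key=lambda p: p[0]):
        total = sum(delta for _, delta in group)
        if axis == "noop" or total == 0:
            continue  # no-ops and self-cancelling bursts emit nothing
        commands.append((axis, total))
    return commands

# A burst of four right-turn steps followed by two climb steps compiles
# into two compound instructions instead of six separate primitives.
print(stack_actions([1, 1, 1, 1, 3, 3]))
# → [('heading', 20), ('level', 2000)]
```

Under this assumed scheme, the policy is still trained and queried over only the five primitives; the stacking layer is a pure post-processing step, which is what keeps the training-time action space small.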