🤖 AI Summary
Existing quadcopter control policies generalize poorly: they overfit to a single platform and cannot transfer zero-shot across diverse hardware. Method: RAPTOR, a lightweight, end-to-end neural-network foundation control policy. Its core contributions are (i) a single compact network (2,084 parameters) that adapts zero-shot, within milliseconds, to unseen quadcopters spanning a 32 g–2.4 kg mass range with heterogeneous motors, frames, propellers, and flight controllers; (ii) a recurrent hidden layer that enables in-context adaptation; and (iii) a meta-imitation learning scheme that distills 1,000 diverse reinforcement-learning teacher policies into one highly generalizable student policy. Results: experiments demonstrate robust trajectory tracking, indoor/outdoor flight, wind-disturbance rejection, and resilience to physical perturbation, overcoming the environment-specific limitations of conventional RL policies.
📝 Abstract
Humans are remarkably data-efficient when adapting to new, unseen conditions, like driving a new car. In contrast, modern robotic control systems, such as neural-network policies trained with Reinforcement Learning (RL), are highly specialized for a single environment. Because of this overfitting, they are known to break down even under small differences, like the Simulation-to-Reality (Sim2Real) gap, and require system identification and retraining for even minimal changes to the system. In this work, we present RAPTOR, a method for training a highly adaptive foundation policy for quadrotor control. Our method enables a single, end-to-end neural-network policy to control a wide variety of quadrotors. We test it on 10 real quadrotors ranging from 32 g to 2.4 kg that also differ in motor type (brushed vs. brushless), frame type (soft vs. rigid), propeller type (2/3/4-blade), and flight controller (PX4/Betaflight/Crazyflie/M5StampFly). We find that a tiny, three-layer policy with only 2084 parameters suffices for zero-shot adaptation to a wide variety of platforms. This adaptation through In-Context Learning is made possible by a recurrence in the hidden layer. The policy is trained with a novel Meta-Imitation Learning algorithm: we sample 1000 quadrotors, train a teacher policy for each using Reinforcement Learning, and then distill the 1000 teachers into a single, adaptive student policy. The resulting foundation policy adapts zero-shot to unseen quadrotors within milliseconds. We extensively test its capabilities under numerous conditions (trajectory tracking, indoor/outdoor flight, wind disturbance, poking, different propellers).
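To make the two core ideas concrete, here is a minimal NumPy sketch of (a) a tiny three-layer policy whose recurrent hidden layer carries the in-context information used to adapt to the current airframe, and (b) a behavioral-cloning loss that regresses the student onto a teacher trajectory, as in the meta-imitation step. All dimensions (`OBS_DIM`, `HIDDEN_DIM`, `ACT_DIM`), the plain tanh-RNN cell, and the function names are illustrative assumptions, not the paper's exact architecture or training code.

```python
import numpy as np

# Illustrative dimensions (assumed): an 18-D state observation mapped to
# 4 motor commands through a recurrent hidden layer. The resulting
# parameter count is in the same ballpark as the paper's ~2k, not exact.
OBS_DIM, HIDDEN_DIM, ACT_DIM = 18, 32, 4

rng = np.random.default_rng(0)

def init_policy():
    """Random parameters: input layer, recurrent hidden layer, output layer."""
    return {
        "W_in": rng.normal(0, 0.1, (HIDDEN_DIM, OBS_DIM)),
        "W_h": rng.normal(0, 0.1, (HIDDEN_DIM, HIDDEN_DIM)),
        "b_h": np.zeros(HIDDEN_DIM),
        "W_out": rng.normal(0, 0.1, (ACT_DIM, HIDDEN_DIM)),
        "b_out": np.zeros(ACT_DIM),
    }

def step(params, obs, h):
    """One control step. The hidden state h is never reset in flight, so it
    can accumulate evidence about the (unknown) quadrotor's dynamics."""
    h_new = np.tanh(params["W_in"] @ obs + params["W_h"] @ h + params["b_h"])
    action = np.tanh(params["W_out"] @ h_new + params["b_out"])  # commands in [-1, 1]
    return action, h_new

def bc_loss(params, obs_seq, teacher_actions, h0):
    """Meta-imitation sketch: unroll the student over a teacher trajectory
    and penalize the mean squared error to the teacher's actions."""
    h, loss = h0, 0.0
    for obs, a_teacher in zip(obs_seq, teacher_actions):
        a, h = step(params, obs, h)
        loss += np.mean((a - a_teacher) ** 2)
    return loss / len(obs_seq)

params = init_policy()
n_params = sum(p.size for p in params.values())
h = np.zeros(HIDDEN_DIM)
for _ in range(100):  # rollout: hidden state accumulates flight context
    obs = rng.normal(size=OBS_DIM)
    action, h = step(params, obs, h)
```

In the full method, this loss would be averaged over trajectories from all 1000 teachers, so a single set of student parameters must imitate every teacher and is thereby forced to infer, from its hidden state alone, which quadrotor it is currently flying.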