RAPTOR: A Foundation Policy for Quadrotor Control

📅 2025-09-14
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing quadrotor control policies suffer from poor generalization: they overfit to a single platform and cannot transfer zero-shot across diverse hardware. Method: We propose RAPTOR, a lightweight, end-to-end neural-network foundation policy for quadrotor control. Its core contributions are (i) a single compact network (2,084 parameters) enabling millisecond-scale zero-shot adaptation to unseen quadrotors spanning a 32 g–2.4 kg mass range with heterogeneous motors, frames, propellers, and flight controllers; (ii) a recurrent hidden layer that enables in-context adaptation; and (iii) Meta-Imitation Learning, which distills 1,000 diverse reinforcement-learning teacher policies into a single, highly adaptive student policy. Results: Experiments demonstrate robust trajectory tracking, indoor/outdoor flight, wind-disturbance rejection, and resilience to physical perturbation, overcoming the environment-specific limitations of conventional RL policies.

📝 Abstract
Humans are remarkably data-efficient when adapting to new unseen conditions, like driving a new car. In contrast, modern robotic control systems, like neural network policies trained using Reinforcement Learning (RL), are highly specialized for single environments. Because of this overfitting, they are known to break down even under small differences like the Simulation-to-Reality (Sim2Real) gap and require system identification and retraining for even minimal changes to the system. In this work, we present RAPTOR, a method for training a highly adaptive foundation policy for quadrotor control. Our method enables training a single, end-to-end neural-network policy to control a wide variety of quadrotors. We test 10 different real quadrotors from 32 g to 2.4 kg that also differ in motor type (brushed vs. brushless), frame type (soft vs. rigid), propeller type (2/3/4-blade), and flight controller (PX4/Betaflight/Crazyflie/M5StampFly). We find that a tiny, three-layer policy with only 2084 parameters is sufficient for zero-shot adaptation to a wide variety of platforms. The adaptation through In-Context Learning is made possible by using a recurrence in the hidden layer. The policy is trained through a novel Meta-Imitation Learning algorithm, where we sample 1000 quadrotors and train a teacher policy for each of them using Reinforcement Learning. Subsequently, the 1000 teachers are distilled into a single, adaptive student policy. We find that within milliseconds, the resulting foundation policy adapts zero-shot to unseen quadrotors. We extensively test the capabilities of the foundation policy under numerous conditions (trajectory tracking, indoor/outdoor, wind disturbance, poking, different propellers).
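The abstract's tiny three-layer recurrent policy can be sketched to show how such a small parameter budget might be laid out. The dimensions below (28-dim observation, 32-unit recurrent hidden layer, 4 motor outputs) are assumptions chosen so that a vanilla RNN cell totals exactly 2,084 parameters; the paper's actual layer sizes and cell type may differ.

```python
import numpy as np


class TinyRecurrentPolicy:
    """Minimal sketch of a small recurrent control policy.

    Hypothetical dimensions: 28-dim observation, 32-unit recurrent
    hidden state, 4 motor outputs. With a vanilla RNN cell this
    totals 2,084 parameters, matching the count reported in the
    paper, though the paper's actual architecture may differ.
    """

    def __init__(self, obs_dim=28, hidden_dim=32, act_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_dim)
        self.W_ih = rng.uniform(-s, s, (hidden_dim, obs_dim))   # 32*28 = 896
        self.W_hh = rng.uniform(-s, s, (hidden_dim, hidden_dim))  # 32*32 = 1024
        self.b_h = np.zeros(hidden_dim)                          # 32
        self.W_ho = rng.uniform(-s, s, (act_dim, hidden_dim))    # 4*32 = 128
        self.b_o = np.zeros(act_dim)                             # 4
        self.h = np.zeros(hidden_dim)

    def num_params(self):
        return sum(w.size for w in
                   (self.W_ih, self.W_hh, self.b_h, self.W_ho, self.b_o))

    def reset(self):
        """Clear the hidden state before flying a new platform."""
        self.h = np.zeros_like(self.h)

    def step(self, obs):
        # The hidden state persists across control steps, accumulating
        # flight history; this recurrence is what allows adaptation
        # through in-context learning rather than retraining.
        self.h = np.tanh(self.W_ih @ obs + self.W_hh @ self.h + self.b_h)
        return np.tanh(self.W_ho @ self.h + self.b_o)  # motor commands in [-1, 1]


policy = TinyRecurrentPolicy()
print(policy.num_params())  # 2084
```

Note that all platform adaptation lives in the hidden state: the weights are shared across quadrotors, and only `h` changes as observations stream in.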
Problem

Research questions and friction points this paper is trying to address.

Training adaptive quadrotor policies for diverse real-world conditions
Overcoming simulation-to-reality gap in neural network control systems
Enabling zero-shot adaptation to various quadrotor configurations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Recurrent neural network for in-context learning
Meta-imitation learning with teacher distillation
Single policy adapts zero-shot to diverse quadrotors
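The Meta-Imitation Learning recipe above (sample many quadrotors, train a teacher per platform, distill all teachers into one student) can be sketched in miniature. Everything below is a stand-in: toy 1-D dynamics replace the quadrotor simulator, hand-set linear gains replace the RL-trained teachers, and a least-squares scalar replaces the recurrent student, so only the loop structure mirrors the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)


def sample_quadrotor():
    """Hypothetical platform parameters: a single response-scale factor."""
    return {"scale": rng.uniform(0.5, 2.0)}


def make_teacher(quad):
    # Stand-in for an RL-trained teacher: a proportional gain tuned
    # to this particular platform's response scale.
    gain = 0.5 / quad["scale"]
    return lambda obs: -gain * obs


def rollout(policy_fn, quad, steps=20):
    """Toy 1-D dynamics standing in for a quadrotor simulator."""
    obs_hist, act_hist = [], []
    x = rng.normal()
    for _ in range(steps):
        a = policy_fn(x)
        obs_hist.append(x)
        act_hist.append(a)
        x = x + quad["scale"] * a  # platform-dependent response
    return np.array(obs_hist), np.array(act_hist)


# Meta-imitation: aggregate (state, teacher action) pairs across many
# sampled platforms, then fit one student to all of them. (The paper
# uses 1,000 teachers; 100 keeps this sketch fast.)
X, Y = [], []
for _ in range(100):
    quad = sample_quadrotor()
    teacher = make_teacher(quad)
    obs, acts = rollout(teacher, quad)
    X.append(obs)
    Y.append(acts)
X, Y = np.concatenate(X), np.concatenate(Y)

# Least-squares scalar student. The real student is a recurrent
# network, which is what lets it infer the platform from flight
# history instead of settling for one averaged gain as done here.
w = (X @ Y) / (X @ X)
```

The averaged student `w` illustrates the limitation a non-adaptive distillation would have: it fits one compromise gain for all platforms, whereas a recurrent student can recover each teacher's platform-specific behavior in context.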
Jonas Eschmann
PhD student, UC Berkeley
reinforcement learning, robotics
Dario Albani
Autonomous Robotics Research Center, Technology Innovation Institute, Abu Dhabi, UAE.
Giuseppe Loianno
UC Berkeley
Robotics, MAVs, Vision, Sensor Fusion