Learning Quadrotor Control From Visual Features Using Differentiable Simulation

📅 2024-10-21
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Reinforcement learning (RL) for vision-driven quadrotor control suffers from low sample efficiency and heavy reliance on precise state feedback, resulting in slow training. Method: This paper proposes an end-to-end vision-based closed-loop control framework built on differentiable simulation. It integrates a lightweight gradient surrogate model with joint state-representation and policy learning, enabling rapid attitude recovery from image features alone, without access to ground-truth state feedback. Differentiable physics simulation, visual feature encoding, and gradient-guided optimization together accelerate policy convergence and improve cross-scenario generalization. Results: Experiments show that the method achieves purely vision-based attitude control within minutes of training, improving sample efficiency by over an order of magnitude compared with standard model-free RL baselines, and establishing a low-sample-cost approach to vision-guided UAV control.

📝 Abstract
The sample inefficiency of reinforcement learning (RL) remains a significant challenge in robotics. RL requires large-scale simulation and can still incur long training times, slowing research and innovation. This issue is particularly pronounced in vision-based control tasks where reliable state estimates are not accessible. Differentiable simulation offers an alternative by enabling gradient back-propagation through the dynamics model, providing low-variance analytical policy gradients and, hence, higher sample efficiency. However, its use for real-world robotic tasks has so far been limited. This work demonstrates the great potential of differentiable simulation for learning quadrotor control. We show that training in differentiable simulation significantly outperforms model-free RL in terms of both sample efficiency and training time, allowing a policy to learn to recover a quadrotor in seconds when vehicle states are provided and in minutes when relying solely on visual features. The key to our success is two-fold. First, the use of a simple surrogate model for gradient computation greatly accelerates training without sacrificing control performance. Second, combining state representation learning with policy learning enhances convergence speed in tasks where only visual features are observable. These findings highlight the potential of differentiable simulation for real-world robotics and offer a compelling alternative to conventional RL approaches.
Problem

Research questions and friction points this paper is trying to address.

Addresses sample inefficiency in reinforcement learning for robotics.
Improves vision-based quadrotor control using differentiable simulation.
Enhances training speed and efficiency with surrogate models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable simulation enables gradient back-propagation
Surrogate model accelerates training without performance loss
State representation learning enhances visual feature convergence
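The first bullet, gradient back-propagation through the dynamics, can be illustrated with a toy sketch (not the paper's code): a one-parameter feedback policy trained on a hypothetical differentiable 1-D attitude model. The dynamics, the gain `k`, the time step, and the quadratic loss are all invented for illustration; the point is that the rollout loss is analytically differentiable in the policy parameter, giving the low-variance gradient that RL would otherwise estimate from samples.

```python
# Toy differentiable simulation (illustrative, not from the paper):
# dynamics theta_{t+1} = theta_t + dt * u_t, policy u_t = -k * theta_t.
# The rollout loss sum_t theta_t^2 is differentiable in k, so an exact
# analytic policy gradient replaces a high-variance sampled RL estimate.

def rollout_loss_and_grad(k, theta0=1.0, dt=0.1, steps=50):
    theta = theta0
    dtheta_dk = 0.0          # d theta_t / d k, propagated through time
    loss, dloss_dk = 0.0, 0.0
    for _ in range(steps):
        loss += theta ** 2
        dloss_dk += 2.0 * theta * dtheta_dk
        u = -k * theta                       # linear feedback policy
        du_dk = -theta - k * dtheta_dk       # chain rule through the policy
        theta = theta + dt * u               # differentiable dynamics step
        dtheta_dk = dtheta_dk + dt * du_dk   # gradient through the dynamics
    return loss, dloss_dk

k = 0.0                                      # start with no feedback
for _ in range(200):                         # plain gradient descent on k
    _, grad = rollout_loss_and_grad(k)
    k -= 0.01 * grad

final_loss, _ = rollout_loss_and_grad(k)     # far below the k=0 loss of 50
```

Starting from `k = 0` (loss 50.0 over the horizon), descent on the analytic gradient drives `k` toward a stabilizing positive gain in a handful of iterations, mirroring the paper's claim that back-propagation through the simulator converges far faster than model-free policy gradients.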
Johannes Heeg
Robotics and Perception Group, Department of Informatics, University of Zurich, and Department of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland
Yunlong Song
Genesis AI
Robotics · Learning · Control · Vision
Davide Scaramuzza
Professor of Robotics and Perception, University of Zurich
Robotics · Robot Vision · Micro Air Vehicles · SLAM · Robot Learning