Leveling the Playing Field: Carefully Comparing Classical and Learned Controllers for Quadrotor Trajectory Tracking

📅 2025-06-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses systematic biases in comparative studies of learning-based (e.g., reinforcement learning) and classical control (e.g., geometric control) methods for quadrotor trajectory tracking. We propose the first symmetric benchmarking framework that eliminates asymmetries across three dimensions: task objective formulation, training data distribution, and access to feedforward information. By unifying task specifications, performing optimal hyperparameter tuning for all methods, and ensuring identical feedforward inputs, we enable a fair, apples-to-apples comparison. Experimental results reveal that the performance gap between the two paradigms is substantially narrower than previously reported. Specifically, geometric control achieves lower steady-state tracking error, making it preferable for low-speed, high-precision positioning; conversely, reinforcement learning exhibits faster transient response, rendering it more suitable for highly agile maneuvers. All code and trained controllers are publicly released.
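The core of the symmetric-benchmarking idea is that both controller classes are scored with the identical objective, receive the identical feedforward reference, and are tuned on the same data. A minimal sketch of that evaluation protocol, using a toy 1-D double integrator in place of the quadrotor (the `rollout`, `rms_tracking_error`, and both policy stubs here are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def rms_tracking_error(states, reference):
    # One metric, applied identically to every controller under test.
    return float(np.sqrt(np.mean((states - reference) ** 2)))

def rollout(policy, reference, dt=0.02):
    # Toy 1-D double-integrator plant; every policy gets the same state
    # and the same feedforward reference sample (no asymmetric access).
    pos, vel = 0.0, 0.0
    trace = []
    for r in reference:
        u = policy(pos, vel, r)   # symmetric feedforward information
        vel += u * dt
        pos += vel * dt
        trace.append(pos)
    return np.array(trace)

# Stand-in "classical" PD controller; gains would be tuned on the
# same trajectory distribution used for the learned policy.
gc_like = lambda p, v, r: 40.0 * (r - p) - 8.0 * v
# Placeholder for a learned policy; in the paper this is an RL network.
rl_like = lambda p, v, r: 35.0 * (r - p) - 7.0 * v

ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
for name, pi in [("GC-like", gc_like), ("RL-like", rl_like)]:
    err = rms_tracking_error(rollout(pi, ref), ref)
    print(f"{name}: RMS tracking error = {err:.3f}")
```

The point of the sketch is the protocol, not the plant: swapping either policy in or out changes nothing about how the task, the reference, or the metric is defined.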

📝 Abstract
Learning-based control approaches like reinforcement learning (RL) have recently produced a slew of impressive results for tasks like quadrotor trajectory tracking and drone racing. Naturally, it is common to demonstrate the advantages of these new controllers against established methods like analytical controllers. We observe, however, that reliably comparing the performance of such very different classes of controllers is more complicated than might appear at first sight. As a case study, we take up the problem of agile tracking of an end-effector for a quadrotor with a fixed arm. We develop a set of best practices for synthesizing the best-in-class RL and geometric controllers (GC) for benchmarking. In the process, we resolve widespread RL-favoring biases in prior studies that provide asymmetric access to: (1) the task definition, in the form of an objective function, (2) representative datasets, for parameter optimization, and (3) feedforward information, describing the desired future trajectory. Our findings are as follows: our improvements to the experimental protocol for comparing learned and classical controllers are critical, and each of the above asymmetries can yield misleading conclusions. Prior works have claimed that RL outperforms GC, but we find the gaps between the two controller classes are much smaller than previously published when accounting for symmetric comparisons. Geometric control achieves lower steady-state error than RL, while RL has better transient performance, resulting in GC performing better in relatively slow or less agile tasks, but RL performing better when greater agility is required. Finally, we open-source implementations of geometric and RL controllers for these aerial vehicles, implementing best practices for future development. Website and code are available at https://pratikkunapuli.github.io/rl-vs-gc/
Problem

Research questions and friction points this paper is trying to address.

Comparing classical and learned controllers for quadrotor trajectory tracking
Addressing biases in prior studies favoring reinforcement learning controllers
Developing best practices for symmetric benchmarking of controller performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Develops best practices for RL and GC benchmarking
Resolves RL-favoring biases in prior studies
Open-sources geometric and RL controller implementations