🤖 AI Summary
This work systematically investigates the impact of position controller gains—traditionally selected based on desired task stiffness or compliance—on three learning paradigms: behavior cloning, reinforcement learning from scratch, and sim-to-real transfer. Challenging conventional gain-tuning practice, the study advocates a learnability-oriented gain selection strategy. Through extensive experiments across multiple tasks and robot embodiments, it finds that behavior cloning performs best with compliant, overdamped gains; reinforcement learning is robust to gain variations when hyperparameters are tuned compatibly; and stiff, overdamped gains significantly degrade sim-to-real transfer. These findings indicate that the optimal controller gains are dictated primarily by the learning paradigm rather than by the intrinsic characteristics of the task, challenging established practice in robotic control design.
📝 Abstract
Position controllers have become the dominant interface for executing learned manipulation policies. Yet a critical design decision remains understudied: how should we choose controller gains for policy learning? The conventional wisdom is to select gains based on desired task compliance or stiffness. However, this logic breaks down when controllers are paired with state-conditioned policies: effective stiffness emerges from the interplay between learned reactions and control dynamics, not from gains alone. We argue that gain selection should instead be guided by learnability: how amenable different gain settings are to the learning algorithm in use. In this work, we systematically investigate how position controller gains affect three core components of modern robot learning pipelines: behavior cloning, reinforcement learning from scratch, and sim-to-real transfer. Through extensive experiments across multiple tasks and robot embodiments, we find that: (1) behavior cloning benefits from compliant and overdamped gain regimes, (2) reinforcement learning can succeed across all gain regimes given compatible hyperparameter tuning, and (3) sim-to-real transfer is harmed by stiff and overdamped gain regimes. These findings reveal that optimal gain selection depends not on the desired task behavior, but on the learning paradigm employed. Project website: https://younghyopark.me/tune-to-learn
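To make the gain regimes in the abstract concrete, here is a minimal illustrative sketch (not from the paper) of a 1D PD position controller. The proportional gain `kp` sets stiffness, and the damping ratio ζ = kd / (2√(kp·m)) separates overdamped (ζ > 1) from underdamped (ζ < 1) behavior; all numbers below are hypothetical examples, not values used in the work:

```python
import math

def simulate_pd(kp, kd, target=1.0, mass=1.0, dt=0.001, steps=5000):
    """Integrate a unit mass under PD position control toward `target`."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        force = kp * (target - x) - kd * v  # PD law: stiffness term + damping term
        v += (force / mass) * dt            # explicit Euler integration
        x += v * dt
    return x

def damping_ratio(kp, kd, mass=1.0):
    """zeta > 1 means overdamped; zeta < 1 means underdamped."""
    return kd / (2.0 * math.sqrt(kp * mass))

# Compliant, overdamped regime: low stiffness, zeta above 1.
print(damping_ratio(kp=10.0, kd=10.0))   # ~1.58 -> overdamped
# Stiff regime: high stiffness with the same damping is underdamped.
print(damping_ratio(kp=400.0, kd=10.0))  # 0.25 -> underdamped
# Either way the controller tracks the target; gains shape the transient.
print(simulate_pd(kp=10.0, kd=10.0))     # close to 1.0 after 5 s
```

The point of the sketch is that "stiffness" is a property of the closed-loop dynamics, not of any single gain, which is why the paper argues that a state-conditioned policy's reactions further reshape the effective compliance.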