Learning to Swim: Reinforcement Learning for 6-DOF Control of Thruster-driven Autonomous Underwater Vehicles

📅 2024-09-30
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
🤖 AI Summary
Achieving high-precision six-degree-of-freedom (6-DOF) motion control for small propeller-driven autonomous underwater vehicles (AUVs) remains challenging under complex nonlinear hydrodynamics, time-varying payloads, and operational disturbances. Method: This paper proposes an end-to-end reinforcement learning control framework, trainable in minutes, that directly maps 6-DOF reference commands to individual thruster outputs. It leverages a highly parallelized, high-fidelity underwater dynamics simulator, integrates domain randomization with command-conditioned Proximal Policy Optimization (PPO), and enables zero-shot sim-to-real transfer without real-world parameter tuning. Contribution/Results: The controller achieves tracking accuracy comparable to hand-tuned PID on a physical AUV, while offering rapid reconfigurability and strong generalization across diverse operating conditions. It significantly enhances the robustness and adaptability of autonomous control in complex underwater environments.

📝 Abstract
Controlling AUVs can be challenging because of the effect of complex non-linear hydrodynamic forces acting on the robot, which are significant in water and cannot be ignored. The problem is exacerbated for small AUVs, for which the dynamics can change significantly with payload changes and deployments under different hydrodynamic conditions. The common approach to AUV control is a combination of passive stabilization, with added buoyancy on top and weights on the bottom, and a PID controller tuned for simple and smooth motion primitives. However, this approach comes at the cost of sluggish controls and often the need to re-tune controllers with configuration changes. In this paper, we propose a fast (trainable in minutes), reinforcement learning-based approach for full 6 degree of freedom (DOF) control of thruster-driven AUVs, mapping 6-DOF command-conditioned inputs directly to thruster outputs. We present a new, highly parallelized simulator for underwater vehicle dynamics. We demonstrate this approach through zero-shot sim-to-real transfer (with no tuning) onto a real AUV, producing results comparable to hand-tuned PID controllers. Furthermore, we show that domain randomization in the simulator produces policies that are robust to small variations in the vehicle's physical parameters.
Problem

Research questions and friction points this paper is trying to address.

Challenges in controlling AUVs due to non-linear hydrodynamic forces.
Difficulty in adapting control for small AUVs with varying dynamics.
Need for robust, fast, and adaptable 6-DOF control methods.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning for 6-DOF AUV control
Highly parallelized underwater dynamics simulator
Zero-shot sim-to-real transfer with domain randomization
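The two ideas above — conditioning the policy on a 6-DOF command and randomizing vehicle dynamics each episode — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the parameter names, ranges, and state layout are hypothetical stand-ins for whatever the authors' simulator randomizes.

```python
import random

# Hypothetical domain-randomization ranges (illustrative values only, not
# from the paper): mass, added-mass scaling, and drag coefficients are
# resampled each training episode so the policy cannot overfit to one model.
PARAM_RANGES = {
    "mass_kg": (9.0, 13.0),
    "added_mass_factor": (0.8, 1.2),
    "linear_drag": (2.0, 6.0),
    "quadratic_drag": (10.0, 30.0),
}


def randomize_dynamics(rng=random):
    """Sample one set of vehicle parameters for a training episode."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}


def make_observation(state, command):
    """Command-conditioned observation: the 6-DOF reference command is
    concatenated with the vehicle state, so a single policy learns to
    track arbitrary commands rather than one fixed setpoint."""
    assert len(command) == 6  # e.g. [vx, vy, vz, roll, pitch, yaw rates]
    return list(state) + list(command)
```

During training, `randomize_dynamics()` would reconfigure the simulator at each episode reset, and `make_observation()` would feed the PPO policy, whose outputs go directly to per-thruster commands.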
Levi Cai
MIT and WHOI
field robotics, marine robotics, multi-robot systems, reinforcement learning, construction robotics
Kevin Chang
Oregon State University
Yogesh A. Girdhar
Woods Hole Oceanographic Institution