Efficient Learning-Based Control of a Legged Robot in Lunar Gravity

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low energy efficiency and poor gravity adaptability of legged robots in low-gravity environments (e.g., the Moon, Mars, and asteroids), this paper proposes a reinforcement learning–based gravity-adaptive control framework. The authors design a gravity-scaled power-optimization reward function that enables policy transfer across diverse gravitational fields, from lunar gravity (0.16 g) to hypothetical super-Earth conditions (2 g). By integrating analytical dynamics modeling with end-to-end policy training, the approach improves locomotion energy efficiency. A constant-force spring offload system is developed to physically emulate lunar gravity (0.16 g) on Earth for experimental validation. Results show that at a 0.4 m/s walking speed under Earth gravity, power consumption is reduced by 23% to 23.4 W versus the baseline; under lunar gravity, it further drops to 12.2 W, a 36% reduction. This work is the first to jointly combine gravity-scaled reward shaping with a physically verifiable offload platform, significantly improving the motion energy efficiency and environmental adaptability of planetary legged robots under stringent power and thermal constraints.

📝 Abstract
Legged robots are promising candidates for exploring challenging areas on low-gravity bodies such as the Moon, Mars, or asteroids, thanks to their advanced mobility on unstructured terrain. However, as planetary robots' power and thermal budgets are highly restricted, these robots need energy-efficient control approaches that easily transfer to multiple gravity environments. In this work, we introduce a reinforcement learning-based control approach for legged robots with gravity-scaled power-optimized reward functions. We use our approach to develop and validate a locomotion controller and a base pose controller in gravity environments from lunar gravity (1.62 m/s²) to a hypothetical super-Earth (19.62 m/s²). Our approach successfully scales across these gravity levels for locomotion and base pose control with the gravity-scaled reward functions. The power-optimized locomotion controller reached a power consumption for locomotion of 23.4 W in Earth gravity on a 15.65 kg robot at 0.4 m/s, a 23% improvement over the baseline policy. Additionally, we designed a constant-force spring offload system that allowed us to conduct real-world experiments on legged locomotion in lunar gravity. In lunar gravity, the power-optimized control policy reached 12.2 W, 36% less than a baseline controller which is not optimized for power efficiency. Our method provides a scalable approach to developing power-efficient locomotion controllers for legged robots across multiple gravity levels.
Problem

Research questions and friction points this paper is trying to address.

Developing energy-efficient legged robot control for lunar gravity exploration
Creating gravity-scaled reinforcement learning controllers for multiple planetary environments
Reducing power consumption in legged robots across varying gravity levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning-based control approach
Gravity-scaled power-optimized reward functions
Constant-force spring offload system
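The core idea behind a gravity-scaled, power-optimized reward is that the power penalty is normalized by the gravity level, so one reward weighting transfers across environments. The sketch below is a minimal illustration of that concept only; the function name, the linear scaling law, and the weight are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a gravity-scaled power penalty reward term.
# All names and the exact scaling law are illustrative assumptions,
# not the authors' published reward function.

G_EARTH = 9.81  # Earth gravity, m/s^2

def power_reward(joint_torques, joint_velocities, gravity, weight=0.01):
    """Penalize mechanical joint power, normalized by the gravity ratio."""
    # Mechanical power: sum of |tau * q_dot| over all actuated joints.
    power = sum(abs(t * v) for t, v in zip(joint_torques, joint_velocities))
    # Scale the penalty inversely with gravity so that low-gravity
    # environments (where locomotion naturally needs less power) see a
    # penalty of comparable magnitude relative to their power budget.
    scale = G_EARTH / gravity
    return -weight * scale * power

# Example: the same torque/velocity profile is penalized more strongly
# per watt in lunar gravity (1.62 m/s^2) than in Earth gravity.
r_earth = power_reward([5.0, 3.0], [1.0, 2.0], gravity=9.81)
r_moon = power_reward([5.0, 3.0], [1.0, 2.0], gravity=1.62)
```

In a typical RL locomotion setup this term would be summed with task rewards (velocity tracking, base pose) each simulation step; the gravity normalization is what lets the same weight be reused from 1.62 m/s² up to 19.62 m/s² without retuning.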
Philip Arm
ETH Zurich
Robotics · Space Robotics · Legged Robots · Field Robotics · Manipulation
Oliver Fischer
ETH Zurich, Robotics Systems Lab; Leonhardstrasse 21, 8092 Zurich, Switzerland
Joseph Church
ETH Zurich, Robotics Systems Lab; Leonhardstrasse 21, 8092 Zurich, Switzerland
Adrian Fuhrer
ETH Zurich, Robotics Systems Lab; Leonhardstrasse 21, 8092 Zurich, Switzerland
Hendrik Kolvenbach
Robotic Systems Lab, ETH Zurich
Robotic Systems · Mechatronics · Field Robotics · Space Exploration
Marco Hutter
Professor of Robotics, ETH Zurich
Legged Robotics · Robotics · Control