Risk-Aware Reinforcement Learning with Bandit-Based Adaptation for Quadrupedal Locomotion

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of stable locomotion for quadrupedal robots in unknown environments—characterized by dynamics mismatches, contact uncertainty, perception noise, and complex terrain—this paper proposes a risk-aware reinforcement learning framework. First, a family of policies with varying robustness levels is trained under conditional value-at-risk (CVaR) constraints to explicitly model risk sensitivity. Second, a model-free, episode-level multi-armed bandit (MAB) mechanism is introduced to autonomously select the optimal policy online without prior environmental knowledge or offline hyperparameter tuning. The framework enables real-time adaptation during deployment. Evaluated across eight unseen simulation scenarios and on the Unitree Go2 hardware platform, it achieves over 1.9× improvement in both average and tail-performance metrics compared to baselines, with optimal policy convergence within two minutes. Our key contribution is the first integration of CVaR-constrained policy optimization with model-free MAB-based online selection, enabling adjustable risk preference and autonomous, graded robustness adaptation in real time.

📝 Abstract
In this work, we study risk-aware reinforcement learning for quadrupedal locomotion. Our approach trains a family of risk-conditioned policies using a Conditional Value-at-Risk (CVaR) constrained policy optimization technique that provides improved stability and sample efficiency. At deployment, we adaptively select the best-performing policy from this family using a multi-armed bandit framework that uses only observed episodic returns, without any privileged environment information, and adapts to unknown conditions on the fly. Hence, we train quadrupedal locomotion policies at various levels of robustness using CVaR and adaptively select the desired level of robustness online to ensure performance in unknown environments. We evaluate our method in simulation across eight unseen settings (varying dynamics, contacts, sensing noise, and terrain) and on a Unitree Go2 robot on previously unseen terrains. Our risk-aware policy attains nearly twice the mean and tail performance of other baselines in unseen environments, and our bandit-based adaptation selects the best-performing risk-aware policy in unknown terrain within two minutes of operation.
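The abstract's tail-performance metric, CVaR, measures the average of the worst outcomes rather than the mean. As a minimal sketch (not the paper's implementation; the function name and sample data are illustrative), the empirical CVaR at level α of a batch of episodic returns is the mean of the worst α-fraction of those returns:

```python
import numpy as np

def empirical_cvar(returns, alpha=0.1):
    """Empirical CVaR_alpha of episodic returns: the mean of the
    worst alpha-fraction of outcomes (the lower tail, since higher
    return is better)."""
    r = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(r))))  # size of the lower tail
    return r[:k].mean()

# Illustrative returns: mostly good episodes with two bad falls.
returns = [5.0, 4.0, 6.0, 1.0, 5.5, 0.5, 4.5, 5.0, 6.5, 5.0]
print(empirical_cvar(returns, alpha=0.2))  # mean of the two worst: 0.75
```

A policy constrained on this quantity is pushed to avoid rare catastrophic episodes (e.g. falls), not just to raise the average return.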
Problem

Research questions and friction points this paper is trying to address.

Developing risk-aware reinforcement learning for quadruped locomotion
Adaptively selecting robust policies using bandit framework
Ensuring performance in unknown environments without privileged information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Risk-aware policies trained using CVaR constraints
Multi-armed bandit framework adaptively selects policies
No privileged environment information required during deployment
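The episode-level selection described above can be sketched with a standard UCB1 bandit over the policy family, fed only by observed episodic returns. This is a hedged illustration under assumptions not stated in the source: the class name, the UCB1 rule, and returns scaled to roughly [0, 1] are all illustrative choices, not the paper's algorithm.

```python
import math
import random

class PolicyBandit:
    """UCB1 over a fixed family of trained policies, using only
    observed episodic returns (no privileged environment info)."""

    def __init__(self, n_policies, c=1.0):
        self.counts = [0] * n_policies   # episodes played per policy
        self.means = [0.0] * n_policies  # running mean return per policy
        self.c = c                       # exploration weight

    def select(self):
        # Play each policy once before applying the UCB rule.
        for i, n in enumerate(self.counts):
            if n == 0:
                return i
        t = sum(self.counts)
        return max(
            range(len(self.counts)),
            key=lambda i: self.means[i]
            + self.c * math.sqrt(2.0 * math.log(t) / self.counts[i]),
        )

    def update(self, policy_idx, episodic_return):
        self.counts[policy_idx] += 1
        n = self.counts[policy_idx]
        self.means[policy_idx] += (episodic_return - self.means[policy_idx]) / n

# Toy usage: three robustness levels; the third yields the best return here.
random.seed(0)
true_means = [0.3, 0.5, 0.8]
bandit = PolicyBandit(len(true_means))
for _ in range(200):
    arm = bandit.select()
    bandit.update(arm, random.gauss(true_means[arm], 0.05))
print(max(range(3), key=lambda i: bandit.means[i]))  # index of the best policy
```

Because the bandit needs only per-episode returns, it matches the paper's deployment setting: no offline tuning and no access to the true environment parameters.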
Yuanhong Zeng
Department of Electrical and Computer Engineering, University of California Los Angeles, Los Angeles, CA, USA
Anushri Dixit
Assistant Professor, UCLA
Robotics · Dynamics and Control · Motion Planning