Morphology-Aware Graph Reinforcement Learning for Tensegrity Robot Locomotion

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Tensegrity robots face significant challenges in motion control, including low sample efficiency, poor robustness, and difficulty in sim-to-real transfer, due to their underactuation and strong dynamic coupling. To address these, we propose a reinforcement learning framework integrating morphology-aware graph neural networks (GNNs) with soft actor-critic (SAC). The GNN explicitly encodes the strut-cable topology and inter-component dynamic couplings, enhancing both the physical interpretability and the sample efficiency of the learned policy; SAC ensures policy stability and efficient exploration. Our approach achieves, for the first time, end-to-end sim-to-real policy transfer without fine-tuning. We successfully deploy three distinct locomotion gaits (straight-line progression plus left and right turning) on a physical 3-strut tensegrity platform. The resulting trajectories exhibit high accuracy and demonstrate strong robustness against actuator noise and variations in cable stiffness.
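The summary's key idea is that the strut-cable topology is encoded explicitly as a graph. The paper does not give the exact encoding, so the sketch below assumes one common 3-bar tensegrity prism wiring (6 strut endpoints as nodes; 3 rigid struts and 9 elastic cables as typed edges); node indices and edge lists are illustrative, not the authors' data.

```python
# Hypothetical typed-graph encoding of a 3-strut tensegrity prism.
# Nodes 0-2 form the bottom triangle, nodes 3-5 the top triangle.

STRUTS = [(0, 3), (1, 4), (2, 5)]          # 3 rigid rods
CABLES = [(0, 1), (1, 2), (2, 0),          # bottom triangle cables
          (3, 4), (4, 5), (5, 3),          # top triangle cables
          (0, 4), (1, 5), (2, 3)]          # diagonal cables

def build_adjacency(struts, cables, n_nodes=6):
    """Return {node: [(neighbor, edge_type), ...]}, the structure a
    morphology-aware GNN would pass messages over."""
    adj = {n: [] for n in range(n_nodes)}
    for edge_type, edges in (("strut", struts), ("cable", cables)):
        for u, v in edges:
            adj[u].append((v, edge_type))
            adj[v].append((u, edge_type))
    return adj

adj = build_adjacency(STRUTS, CABLES)
```

In this wiring every endpoint touches exactly one strut and three cables, so the graph is 4-regular; a GNN operating on it sees each component's immediate mechanical couplings directly.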

📝 Abstract
Tensegrity robots combine rigid rods and elastic cables, offering high resilience and deployability but posing major challenges for locomotion control due to their underactuated and highly coupled dynamics. This paper introduces a morphology-aware reinforcement learning framework that integrates a graph neural network (GNN) into the Soft Actor-Critic (SAC) algorithm. By representing the robot's physical topology as a graph, the proposed GNN-based policy captures coupling among components, enabling faster and more stable learning than conventional multilayer perceptron (MLP) policies. The method is validated on a physical 3-bar tensegrity robot across three locomotion primitives, including straight-line tracking and bidirectional turning. It shows superior sample efficiency, robustness to noise and stiffness variations, and improved trajectory accuracy. Notably, the learned policies transfer directly from simulation to hardware without fine-tuning, achieving stable real-world locomotion. These results demonstrate the advantages of incorporating structural priors into reinforcement learning for tensegrity robot control.
Problem

Research questions and friction points this paper is trying to address.

Controlling locomotion of underactuated tensegrity robots with strongly coupled dynamics
Improving learning stability and sample efficiency
Enabling direct policy transfer from simulation to physical hardware
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph neural network policy integrated into soft actor-critic reinforcement learning
Representation of the robot's strut-cable topology as a graph to capture inter-component coupling
Direct simulation-to-hardware policy transfer without fine-tuning
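To make the "graph for coupling capture" idea concrete, here is a minimal, self-contained sketch of one message-passing round over a strut-cable graph followed by a per-cable readout. The topology, uniform features, and fixed weights are placeholder assumptions for illustration; the paper's actual GNN layers, feature vectors, and trained parameters are not specified here.

```python
# Illustrative single GNN message-passing step on a 3-bar tensegrity graph.
# Edge-type-specific weights stand in for learned parameters.

STRUTS = [(0, 3), (1, 4), (2, 5)]
CABLES = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3),
          (0, 4), (1, 5), (2, 3)]

adj = {n: [] for n in range(6)}
for edge_type, edges in (("strut", STRUTS), ("cable", CABLES)):
    for u, v in edges:
        adj[u].append((v, edge_type))
        adj[v].append((u, edge_type))

def message_pass(features, adj, w_strut=0.5, w_cable=0.3):
    """One round: each node adds type-weighted sums of neighbor features."""
    return {n: features[n] + sum((w_strut if t == "strut" else w_cable)
                                 * features[m] for m, t in nbrs)
            for n, nbrs in adj.items()}

def cable_actions(features, cables):
    """Readout: one scalar action per actuated cable from its endpoints."""
    return [features[u] + features[v] for u, v in cables]

h = message_pass({n: 1.0 for n in range(6)}, adj)
actions = cable_actions(h, CABLES)   # 9 values, one per cable
```

Because the readout is per-edge, the same policy generalizes over components that share local structure, which is one plausible source of the sample-efficiency gains the paper reports for GNN over MLP policies.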