🤖 AI Summary
Tensegrity robots face significant challenges in motion control, including low sample efficiency, poor robustness, and difficult sim-to-real transfer, due to their underactuation and strong dynamic coupling. To address these, we propose a reinforcement learning framework that integrates morphology-aware graph neural networks (GNNs) with soft actor-critic (SAC). The GNN explicitly encodes the strut-cable topology and inter-component dynamic couplings, improving both the physical interpretability and the sample efficiency of the learned policy, while SAC provides stable policy updates and efficient exploration. Our approach achieves, for the first time, end-to-end sim-to-real policy transfer without fine-tuning. We successfully deploy three distinct locomotion gaits (straight-line progression plus left and right turning) on a physical 3-strut tensegrity platform. The resulting trajectories exhibit high accuracy and strong robustness against actuator noise and variations in cable stiffness.
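To make the "morphology-aware" idea concrete, here is a minimal sketch of how a 3-strut tensegrity can be encoded as a graph and pushed through one message-passing step with separate weights for strut and cable edges. The node count, edge layout, feature width, and readout are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Hypothetical 3-strut tensegrity: 6 nodes (strut endpoints),
# 3 rigid strut edges + 9 elastic cable edges. This particular
# connectivity is an assumption for illustration.
NUM_NODES = 6
STRUT_EDGES = [(0, 1), (2, 3), (4, 5)]
CABLE_EDGES = [(0, 2), (2, 4), (4, 0), (1, 3), (3, 5), (5, 1),
               (0, 3), (2, 5), (4, 1)]
EDGES = [(i, j, 0) for i, j in STRUT_EDGES] + [(i, j, 1) for i, j in CABLE_EDGES]

def message_pass(h, W_self, W_strut, W_cable):
    """One morphology-aware message-passing step: each node aggregates
    neighbour features, with distinct weights for strut vs. cable edges."""
    m = h @ W_self
    for i, j, kind in EDGES:
        W = W_strut if kind == 0 else W_cable
        m[i] += h[j] @ W
        m[j] += h[i] @ W
    return np.tanh(m)

rng = np.random.default_rng(0)
D = 8                                       # node feature width (assumed)
h = rng.normal(size=(NUM_NODES, D))         # per-node state, e.g. position/velocity features
Ws = [rng.normal(scale=0.1, size=(D, D)) for _ in range(3)]
h = message_pass(h, *Ws)

# Readout: one bounded rest-length command per cable, from the mean
# of its two endpoint embeddings projected through a shared vector.
w_out = rng.normal(scale=0.1, size=D)
actions = np.tanh(np.array([(h[i] + h[j]) @ w_out / 2 for i, j in CABLE_EDGES]))
print(actions.shape)  # (9,) -- one action per cable
```

Because the same edge-type weights are reused everywhere, the policy's parameter count is independent of which cable it acts on; this weight sharing across the topology is one plausible source of the sample-efficiency gains over an MLP.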
📝 Abstract
Tensegrity robots combine rigid rods and elastic cables, offering high resilience and deployability but posing major challenges for locomotion control due to their underactuated and highly coupled dynamics. This paper introduces a morphology-aware reinforcement learning framework that integrates a graph neural network (GNN) into the Soft Actor-Critic (SAC) algorithm. By representing the robot's physical topology as a graph, the proposed GNN-based policy captures coupling among components, enabling faster and more stable learning than conventional multilayer perceptron (MLP) policies. The method is validated on a physical 3-bar tensegrity robot across three locomotion primitives: straight-line tracking and left and right turning. It shows superior sample efficiency, robustness to noise and stiffness variations, and improved trajectory accuracy. Notably, the learned policies transfer directly from simulation to hardware without fine-tuning, achieving stable real-world locomotion. These results demonstrate the advantages of incorporating structural priors into reinforcement learning for tensegrity robot control.
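For reference, the SAC side of the framework optimizes the standard maximum-entropy objective (this is the textbook form of SAC, not a formula taken from the paper):

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
\Big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big]
```

Here $r$ is the locomotion reward, $\mathcal{H}$ the policy entropy, and $\alpha$ the temperature trading off reward against exploration; the entropy bonus is what underpins the stable exploration the summary attributes to SAC.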