🤖 AI Summary
Transformer-based architectures suffer from high inference latency, hindering their deployment in real-time robotic closed-loop control. Method: This work pioneers the integration of xLSTM into large-scale action modeling for robotics, replacing self-attention with xLSTM blocks to achieve linear-time inference complexity and robust long-horizon extrapolation, while leveraging an offline reinforcement learning framework that provides both training scalability and real-time online inference. Contribution/Results: Evaluated on a comprehensive benchmark of 432 robot control tasks across six domains, the proposed model matches Transformer performance while substantially accelerating inference, reducing end-to-end latency to the millisecond level. This marks the first practical deployment of a large action model for real-robot closed-loop control, bridging the gap between foundation-model capabilities and real-time robotic autonomy.
📝 Abstract
In recent years, there has been a trend in the field of Reinforcement Learning (RL) towards large action models trained offline on large-scale datasets via sequence modeling. Existing models are primarily based on the Transformer architecture, which results in powerful agents. However, due to slow inference times, Transformer-based approaches are impractical for real-time applications such as robotics. Recently, modern recurrent architectures, such as xLSTM and Mamba, have been proposed that exhibit parallelization benefits during training similar to the Transformer architecture while offering fast inference. In this work, we study the aptitude of these modern recurrent architectures for large action models. Consequently, we propose a Large Recurrent Action Model (LRAM) with an xLSTM at its core that comes with linear-time inference complexity and natural sequence-length extrapolation abilities. Experiments on 432 tasks from 6 domains show that LRAM compares favorably to Transformers in terms of performance and speed.
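The latency argument above can be illustrated with a toy sketch (this is not the paper's implementation; the layer below is a generic gated linear recurrence, and all names and dimensions are illustrative assumptions): a recurrent layer carries a fixed-size state, so each decoded action costs the same regardless of how long the episode is, whereas causal self-attention must attend over a key-value cache that grows with every step.

```python
import numpy as np

def recurrent_step(state, x, A, B, C):
    """One inference step of a simplified gated linear recurrence:
    state' = A * state + B @ x;  y = C @ state'.
    The state has fixed size d, so per-token cost is O(d^2),
    independent of sequence length (linear-time over the sequence)."""
    state = A * state + B @ x
    return state, C @ state

def attention_step(cache, x, Wq, Wk, Wv):
    """One inference step of causal self-attention with a KV cache.
    The cache grows by one (key, value) pair per token, so per-token
    cost is O(t * d) at step t (quadratic-time over the sequence)."""
    q, k, v = Wq @ x, Wk @ x, Wv @ x
    cache.append((k, v))
    K = np.stack([kv[0] for kv in cache])  # (t, d) -- grows each step
    V = np.stack([kv[1] for kv in cache])  # (t, d)
    w = np.exp(K @ q / np.sqrt(len(q)))    # unnormalized attention scores
    w /= w.sum()                           # softmax over all past tokens
    return cache, w @ V

# Decode a short action sequence with both layers.
rng = np.random.default_rng(0)
d = 4
A = rng.uniform(0.5, 0.9, d)               # decay gate (kept < 1 for stability)
B, C = rng.standard_normal((2, d, d))
Wq, Wk, Wv = rng.standard_normal((3, d, d))

state, cache = np.zeros(d), []
for t in range(8):
    x = rng.standard_normal(d)
    state, y_rec = recurrent_step(state, x, A, B, C)
    cache, y_att = attention_step(cache, x, Wq, Wk, Wv)

print(state.shape)  # fixed-size state: (4,)
print(len(cache))   # KV cache grew to 8 entries
```

After 8 steps the recurrent layer still holds a single `(d,)` state vector, while the attention cache holds 8 key-value pairs; this constant-memory, constant-cost step is what makes recurrent backbones attractive for real-time closed-loop control.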