VertiFormer: A Data-Efficient Multi-Task Transformer for Off-Road Robot Mobility

📅 2025-02-01
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Modeling robot locomotion over extremely rugged off-road terrain with limited real-world data (only one hour) remains challenging. To address this, the authors propose a lightweight Transformer-based learning framework explicitly designed for vehicle–terrain dynamic interaction. The approach integrates learnable masked modeling with a non-autoregressive multi-task prediction paradigm, unifying the encoding of pose, action, and terrain patches and supporting cross-task collaborative learning, including forward and inverse kinodynamics modeling. It further incorporates cross-modal temporal encoding, multi-objective loss optimization, and an efficient decoding mechanism. Evaluated on a physical mobile robot, the method improves off-road navigation accuracy, terrain adaptability, and motion-prediction robustness under data scarcity. The authors also present it as the first empirical demonstration that Transformer architectures are feasible and effective for real-time, resource-constrained robot learning.

📝 Abstract
Sophisticated learning architectures, e.g., Transformers, present a unique opportunity for robots to understand complex vehicle-terrain kinodynamic interactions for off-road mobility. While internet-scale data are available for Natural Language Processing (NLP) and Computer Vision (CV) tasks to train Transformers, real-world mobility data are difficult to acquire with physical robots navigating off-road terrain. Furthermore, training techniques specifically designed to process text and image data in NLP and CV may not apply to robot mobility. In this paper, we propose VertiFormer, a novel data-efficient multi-task Transformer model trained with only one hour of data to address such challenges of applying Transformer architectures for robot mobility on extremely rugged, vertically challenging, off-road terrain. Specifically, VertiFormer employs a new learnable masked modeling and next token prediction paradigm to predict the next pose, action, and terrain patch to enable a variety of off-road mobility tasks simultaneously, e.g., forward and inverse kinodynamics modeling. The non-autoregressive design mitigates computational bottlenecks and error propagation associated with autoregressive models. VertiFormer's unified modality representation also enhances learning of diverse temporal mappings and state representations, which, combined with multiple objective functions, further improves model generalization. Our experiments offer insights into effectively utilizing Transformers for off-road robot mobility with limited data and demonstrate our efficiently trained Transformer can facilitate multiple off-road mobility tasks onboard a physical mobile robot.
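The abstract's core idea can be made concrete with a small sketch. Below is a minimal, hypothetical PyTorch rendering (not the authors' code) of the learnable-masked, non-autoregressive prediction scheme: pose, action, and terrain-patch embeddings from a short history are concatenated with one learnable mask token per target modality, and a Transformer encoder fills all masks in a single forward pass instead of decoding autoregressively. All dimensions, layer sizes, and names here are assumptions for illustration.

```python
# Hypothetical sketch of non-autoregressive, masked multi-task prediction
# in the spirit of VertiFormer. Assumed sizes: 6-D pose, 2-D action,
# flattened 16x16 terrain elevation patch, 4-step history.
import torch
import torch.nn as nn

class VertiFormerSketch(nn.Module):
    def __init__(self, d_model=64, history=4, nhead=4, nlayers=2):
        super().__init__()
        # Per-modality input projections into a unified token space.
        self.pose_in = nn.Linear(6, d_model)
        self.act_in = nn.Linear(2, d_model)
        self.patch_in = nn.Linear(16 * 16, d_model)
        # One learnable mask token per predicted modality.
        self.mask_pose = nn.Parameter(torch.zeros(1, 1, d_model))
        self.mask_act = nn.Parameter(torch.zeros(1, 1, d_model))
        self.mask_patch = nn.Parameter(torch.zeros(1, 1, d_model))
        # Learned positional embedding over 3*history context tokens + 3 masks.
        self.pos = nn.Parameter(torch.zeros(1, 3 * history + 3, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        # Task heads decode the filled-in mask tokens (forward kinodynamics,
        # inverse kinodynamics, terrain prediction share one backbone).
        self.pose_head = nn.Linear(d_model, 6)
        self.act_head = nn.Linear(d_model, 2)
        self.patch_head = nn.Linear(d_model, 16 * 16)

    def forward(self, poses, actions, patches):
        # poses: (B, T, 6), actions: (B, T, 2), patches: (B, T, 256)
        B = poses.shape[0]
        ctx = torch.cat(
            [self.pose_in(poses), self.act_in(actions), self.patch_in(patches)],
            dim=1,
        )
        masks = torch.cat(
            [m.expand(B, -1, -1)
             for m in (self.mask_pose, self.mask_act, self.mask_patch)],
            dim=1,
        )
        tokens = torch.cat([ctx, masks], dim=1) + self.pos
        out = self.encoder(tokens)  # one pass, no autoregressive loop
        # The last three tokens are the filled mask slots.
        return (self.pose_head(out[:, -3]),
                self.act_head(out[:, -2]),
                self.patch_head(out[:, -1]))

model = VertiFormerSketch()
p, a, t = model(torch.randn(2, 4, 6),
                torch.randn(2, 4, 2),
                torch.randn(2, 4, 256))
print(tuple(p.shape), tuple(a.shape), tuple(t.shape))
```

Because all targets are predicted jointly from mask tokens in one encoder pass, there is no step-by-step rollout, which is how a non-autoregressive design avoids the error accumulation and per-step latency of autoregressive decoding.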
Problem

Research questions and friction points this paper is trying to address.

Robotics
Complex Terrain
Data Training Methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

VertiFormer
Multi-task Learning
Adaptive Terrain Navigation
Mohammad Nazeri
Computer Science student, George Mason University
Computer Vision · Self-supervised Learning · Robotics
Anuj Pokhrel
George Mason University
Robotics · Navigation
Alexandyr Card
Department of Computer Science, George Mason University
A. Datar
Department of Computer Science, George Mason University
Garrett Warnell
Research Scientist, Army Research Laboratory
Machine Learning · Robotics · Artificial Intelligence
Xuesu Xiao
Department of Computer Science, George Mason University