MonoForce: Self-supervised Learning of Physics-informed Model for Predicting Robot-terrain Interaction

📅 2023-09-16
🏛️ IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address inaccurate trajectory prediction for mobile robots navigating deformable, non-rigid terrain such as tall grass and shrubbery, this paper proposes an end-to-end differentiable, physics-aware model. Taking monocular images as input, it tightly couples a black-box vision-based force-estimation network with a white-box analytical-mechanics trajectory generator, enabling self-supervised training through a differentiable physics engine, without requiring ground-truth force labels. The key contribution is a unified differentiable formulation that integrates visual force estimation with classical mechanical constraints (e.g., the Newton–Euler equations), preserving both interpretability and generalization. Experiments show that the method matches state-of-the-art performance on rigid terrain while significantly reducing trajectory prediction error, by up to 38%, in challenging tall-grass and shrub environments. The code and benchmark dataset are publicly released.
📝 Abstract
While autonomous navigation of mobile robots on rigid terrain is a well-explored problem, navigating on deformable terrain such as tall grass or bushes remains a challenge. To address it, we introduce an explainable, physics-aware, and end-to-end differentiable model which predicts the outcome of robot-terrain interaction from camera images, on both rigid and non-rigid terrain. The proposed MonoForce model consists of a black-box module which predicts robot-terrain interaction forces from onboard cameras, followed by a white-box module which transforms these forces and control signals into predicted trajectories using only the laws of classical mechanics. The differentiable white-box module allows backpropagating the predicted trajectory errors into the black-box module, serving as a self-supervised loss that measures the consistency between the predicted forces and the ground-truth trajectories of the robot. Experimental evaluation on a public dataset and our own data has shown that while its prediction capabilities are comparable to those of state-of-the-art algorithms on rigid terrain, MonoForce shows superior accuracy on non-rigid terrain such as tall grass or bushes. To facilitate the reproducibility of our results, we release both the code and the datasets.
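The core training idea described in the abstract, backpropagating trajectory error through a differentiable physics model into a learned force predictor, can be illustrated with a toy example. The sketch below is not the authors' code: it replaces the vision network with a single learned parameter, the robot with a 1-D point mass, and autodiff with finite differences, purely to show how a trajectory-consistency loss can supervise force estimation without any force labels.

```python
# Toy sketch of MonoForce-style self-supervision (illustrative assumptions only):
# a "black-box" force model is trained solely from trajectory error, which is
# backpropagated through a "white-box" Newtonian integrator.

def predict_force(theta, feature):
    # Black-box stand-in: one learned parameter scaling a terrain feature.
    return theta * feature

def rollout(theta, features, dt=0.1, mass=1.0):
    # White-box module: explicit Euler integration of Newton's second law.
    x, v, traj = 0.0, 0.0, []
    for f in features:
        a = predict_force(theta, f) / mass  # F = m * a
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

def loss(theta, features, gt_traj):
    # Self-supervised loss: consistency of predicted vs. observed trajectory.
    return sum((p - g) ** 2 for p, g in zip(rollout(theta, features), gt_traj))

def grad(theta, features, gt_traj, eps=1e-6):
    # Finite-difference stand-in for autodiff through the physics engine.
    return (loss(theta + eps, features, gt_traj)
            - loss(theta - eps, features, gt_traj)) / (2 * eps)

features = [1.0, 0.8, 0.6, 0.9]        # per-step terrain features (made up)
gt_traj = rollout(2.0, features)       # "observed" trajectory from theta = 2.0
theta = 0.5                            # poorly initialized force model
for _ in range(300):                   # gradient descent on trajectory error
    theta -= 2.0 * grad(theta, features, gt_traj)
print(theta)                           # recovers the terrain parameter ~2.0
```

No ground-truth force ever appears in the loss; the force model is corrected only through its downstream effect on the integrated trajectory, which is the essence of the self-supervised scheme the paper describes.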
Problem

Research questions and friction points this paper is trying to address.

Robotics
Complex Terrain
Behavior Prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-learning Model
Physics-based Understanding
Non-rigid Terrain Prediction
R. Agishev
VRAS group, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic
Karel Zimmermann
Czech Technical University
computer vision · tracking · robotics
V. Kubelka
RNP Lab of the AASS Research Centre, Örebro University, Örebro, Sweden
M. Pecka
VRAS group, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic
Tomáš Svoboda
VRAS group, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic