Robust Visual Embodiment: How Robots Discover Their Bodies in Real Environments

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing autonomous self-modeling approaches exhibit insufficient robustness under realistic visual degradations—such as noise, blur, and cluttered backgrounds—severely limiting robotic morphological perception, trajectory planning, and damage recovery. This paper introduces the first interference-resilient framework for vision-based self-modeling, integrating task-aware denoising, morphology-preserving regularization, and semantic-segmentation-guided ego-body separation to ensure geometric consistency and semantic fidelity of morphological representations under degradation. Evaluated in both simulation and real-world experiments, the method significantly improves self-modeling accuracy across diverse degradations—including Gaussian noise, salt-and-pepper noise, and motion blur—achieving performance close to the clean-condition baseline in morphological prediction, motion planning, and damage recovery tasks, and outperforming state-of-the-art methods. This work establishes a new paradigm for reliable deployment of self-perceptive robots in unstructured real-world environments.

📝 Abstract
Robots with internal visual self-models promise unprecedented adaptability, yet existing autonomous modeling pipelines remain fragile under realistic sensing conditions such as noisy imagery and cluttered backgrounds. This paper presents the first systematic study quantifying how visual degradations, including blur, salt-and-pepper noise, and Gaussian noise, affect robotic self-modeling. Through both simulation and physical experiments, we demonstrate their impact on morphology prediction, trajectory planning, and damage recovery in state-of-the-art pipelines. To overcome these challenges, we introduce a task-aware denoising framework that couples classical restoration with morphology-preserving constraints, ensuring retention of structural cues critical for self-modeling. In addition, we integrate semantic segmentation to robustly isolate robots from cluttered and colorful scenes. Extensive experiments show that our approach restores near-baseline performance across simulated and physical platforms, while existing pipelines degrade significantly. These contributions advance the robustness of visual self-modeling and establish practical foundations for deploying self-aware robots in unpredictable real-world environments.
Problem

Research questions and friction points this paper is trying to address.

Quantifying visual degradation effects on robotic self-modeling performance
Overcoming sensing fragility in autonomous visual self-modeling pipelines
Enhancing robot morphology prediction and damage recovery robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Task-aware denoising framework with morphology-preserving constraints
Semantic segmentation to isolate robots from cluttered scenes
Restores near-baseline performance under visual degradations
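The restore-then-segment idea behind these contributions can be illustrated with a toy sketch. Everything here is an illustrative stand-in, not the paper's actual pipeline: impulse (salt-and-pepper) noise is suppressed with a classical median filter, a simple brightness threshold plays the role of the semantic segmenter, and mask overlap (IoU) with the clean reference stands in for self-modeling accuracy.

```python
import numpy as np

def add_salt_pepper(img, amount=0.05, rng=None):
    """Corrupt a random fraction of pixels with extreme values (salt & pepper)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = img.copy()
    mask = rng.random(img.shape) < amount
    noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return noisy

def median_denoise(img):
    """3x3 median filter: a classical restoration step that suppresses
    impulse noise while largely preserving sharp body boundaries."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    windows = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

def segment_body(img, thresh=0.5):
    """Stand-in for semantic segmentation: brightness-threshold body mask."""
    return img > thresh

def iou(a, b):
    """Intersection-over-union between two binary masks."""
    return (a & b).sum() / (a | b).sum()

# Synthetic scene: a bright square "robot body" on a dark background.
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0

noisy = add_salt_pepper(clean)
denoised = median_denoise(noisy)

ref = segment_body(clean)
iou_noisy = iou(segment_body(noisy), ref)       # degraded input: mask overlap drops
iou_denoised = iou(segment_body(denoised), ref) # restored input: near-clean overlap
```

In this toy setting, segmenting the denoised image recovers a body mask much closer to the clean reference than segmenting the noisy image directly, mirroring the paper's claim that restoration before ego-body separation recovers near-baseline performance.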
Salim Rezvani
Department of Mechanical, Industrial and Mechatronics Engineering, Toronto Metropolitan University, Toronto, Canada
Ammar Jaleel Mahmood
Department of Mechanical, Industrial and Mechatronics Engineering, Toronto Metropolitan University, Toronto, Canada
Robin Chhabra
Professor of Robotics & Mechatronics, Toronto Metropolitan University
Soft Robotics · Embodied AI · Multi-Robot Systems · Robotic Self-Perception · Geometric Mechanics