🤖 AI Summary
This work addresses self-occlusion in robotic task execution, which often prevents a single viewpoint from accurately perceiving action progress and thus compromises the safety and efficiency of human-robot collaboration. To mitigate this, the paper introduces multi-view perception to the action progress prediction task for the first time, proposing a deep learning architecture that fuses visual information from multiple camera perspectives. Evaluated on the Mobile ALOHA platform, the proposed method significantly outperforms single-view baselines, improving both the accuracy and the robustness of action progress modeling. This offers a path toward more intelligent decision-making and more efficient human-robot collaboration.
📝 Abstract
For robots to operate effectively and safely alongside humans, they must be able to understand the progress of ongoing actions. This ability, known as action progress prediction, is critical for tasks ranging from timely assistance to autonomous decision-making. However, modeling action progression has often been overlooked in robotics. Moreover, a single camera may be insufficient for understanding a robot's ego-actions, as self-occlusion can significantly hinder perception and degrade model performance. In this paper, we propose a multi-view architecture for action progress prediction in robot manipulation tasks. Experiments on Mobile ALOHA demonstrate the effectiveness of the proposed approach.
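The paper's exact architecture is not described here, but the core idea of multi-view fusion for progress prediction can be illustrated with a minimal sketch: encode each camera view, weight the views so that an occluded one can be discounted, and regress a progress value in [0, 1]. Every design choice below (the shared convolutional encoder, the attention-based view fusion, the sigmoid progress head, and names such as `MultiViewProgressPredictor`) is a hypothetical illustration under those assumptions, not the authors' model.

```python
# Hypothetical sketch of a multi-view action progress predictor.
# The shared per-view encoder, attention-based fusion, and sigmoid
# progress head are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn


class MultiViewProgressPredictor(nn.Module):
    def __init__(self, num_views: int = 3, feat_dim: int = 256):
        super().__init__()
        # One encoder shared across views keeps the parameter count
        # independent of the number of cameras (an assumption).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Learned attention weights decide how much each view contributes,
        # so a self-occluded camera can be down-weighted.
        self.view_attn = nn.Linear(feat_dim, 1)
        # Regression head maps the fused feature to progress in [0, 1].
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),
        )

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (batch, num_views, 3, H, W)
        b, v, c, h, w = views.shape
        feats = self.encoder(views.reshape(b * v, c, h, w)).reshape(b, v, -1)
        weights = torch.softmax(self.view_attn(feats), dim=1)  # (b, v, 1)
        fused = (weights * feats).sum(dim=1)                   # (b, feat_dim)
        return self.head(fused).squeeze(-1)                    # (b,) in [0, 1]


# Smoke test with three hypothetical camera views.
model = MultiViewProgressPredictor(num_views=3)
frames = torch.randn(2, 3, 3, 96, 96)  # (batch, views, channels, H, W)
print(model(frames).shape)  # torch.Size([2])
```

The softmax over views is one simple way a fusion module could learn to rely on whichever cameras have an unobstructed line of sight; the actual paper may use a different fusion mechanism.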