AI Summary
This work addresses the limited generalization of existing models in multimodal action prediction, which stems from inadequate temporal representation and surfaces especially on unseen action sequences. Inspired by the mirror neuron system, the authors propose DMBN-PTE, a framework that combines Conditional Neural Processes (CNP), the Deep Modality Blending Network (DMBN), and Positional Temporal Encoding (PTE). The model uses self-supervised learning to reconstruct visuomotor signals from partial observations, enabling robust long-horizon action prediction. The key innovation is the positional temporal encoding, which substantially enhances the model's capacity to capture temporal dynamics. Experimental results show that DMBN-PTE generalizes better to unseen action sequences, offering an effective approach to long-term action prediction in robotic applications.
Abstract
Inspired by the human ability to understand and predict the actions of others, we study the applicability of Conditional Neural Processes (CNP) to self-supervised multimodal action prediction in robotics. Following recent results on the ontogeny of the Mirror Neuron System (MNS), we focus on the preliminary objective of self-action prediction. We find a suitable MNS-inspired model in the existing Deep Modality Blending Network (DMBN), which reconstructs the visuomotor sensory signal of a partially observed action sequence by leveraging the probabilistic generation of CNP. Through qualitative and quantitative evaluation, we highlight its difficulty in generalizing to unseen action sequences and trace the cause to its internal representation of time. We therefore propose a revised version, termed DMBN-Positional Time Encoding (DMBN-PTE), that learns a more robust representation of temporal information, and we provide preliminary evidence of its effectiveness in expanding the applicability of the architecture. DMBN-PTE is a first step toward robotic systems that autonomously learn to forecast actions on longer time scales, refining their predictions as new observations arrive.
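The abstract does not specify the exact form of the positional time encoding; a common choice for injecting time into a learned representation is the sinusoidal positional encoding of Vaswani et al. (2017), applied here to scalar time steps. The sketch below is purely illustrative of that family of encodings; the function name, dimensionality, and frequency base are assumptions, not the paper's implementation.

```python
import numpy as np

def positional_time_encoding(t, d_model=16):
    """Map scalar time steps t (shape [T]) to d_model-dimensional
    sinusoidal vectors: even dims get sin, odd dims get cos, at
    geometrically spaced frequencies. Hypothetical illustration;
    DMBN-PTE's actual encoding may differ."""
    t = np.asarray(t, dtype=float)[:, None]          # [T, 1]
    i = np.arange(d_model // 2)[None, :]             # [1, d_model/2]
    freq = 1.0 / (10000.0 ** (2 * i / d_model))      # decreasing frequencies
    angles = t * freq                                # [T, d_model/2]
    enc = np.empty((t.shape[0], d_model))
    enc[:, 0::2] = np.sin(angles)
    enc[:, 1::2] = np.cos(angles)
    return enc

# Encode a 50-step action sequence into 16-dimensional time vectors.
pe = positional_time_encoding(np.arange(50), d_model=16)
print(pe.shape)  # (50, 16)
```

Unlike a raw scalar time index, such an encoding gives every time step a bounded, multi-scale representation, which is one plausible reason it could help a model extrapolate to sequence lengths or timings not seen during training.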