🤖 AI Summary
This study addresses the developmental interplay of top-down (TD) and bottom-up (BU) mechanisms in robotic visual attention. Existing approaches struggle to shift attentional behavior dynamically from saliency-driven to prediction-driven control. To address this, the authors propose A3RNN, a bidirectional RNN architecture that integrates TD signals generated by internal predictive models with BU cues derived from saliency detection, trained via imitation learning so that the two streams adapt to each other over the course of learning. Notably, A3RNN self-organizes structured, temporally coherent, and interpretable attention trajectories without explicit stability constraints. Experiments on robotic manipulation tasks show that attention evolves from saliency-driven exploration to prediction-driven guidance, and that A3RNN produces more coherent and interpretable attention patterns than baseline models.
📝 Abstract
This study investigates the developmental interaction between top-down (TD) and bottom-up (BU) visual attention in robotic learning. Our goal is to understand how structured, human-like attentional behavior emerges through the mutual adaptation of TD and BU mechanisms over time. To this end, we propose $A^3 RNN$, a novel attention model that integrates predictive TD signals and saliency-based BU cues through a bidirectional attention architecture.
We evaluate our model in robotic manipulation tasks using imitation learning. Experimental results show that attention behaviors evolve throughout training, from saliency-driven exploration to prediction-driven guidance. Initially, BU attention highlights visually salient regions that guide TD processing; as learning progresses, TD attention stabilizes and begins to reshape what is perceived as salient. This trajectory reflects principles from cognitive science and the free-energy framework, suggesting the importance of self-organizing attention through the interaction between perception and internal prediction. Although not explicitly optimized for stability, our model exhibits more coherent and interpretable attention patterns than baselines, supporting the idea that developmental mechanisms contribute to robust attention formation.
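The abstract describes a coupling in which BU saliency dominates early in training and TD prediction gradually takes over. The excerpt gives no equations, so the sketch below is only an illustrative toy, not the actual $A^3 RNN$ mechanism: `combine_attention` and the blending weight `alpha` are hypothetical names, and a simple convex combination of two attention maps stands in for the paper's bidirectional architecture.

```python
import numpy as np

def combine_attention(bu_saliency, td_prediction, alpha):
    """Blend bottom-up (BU) and top-down (TD) attention maps.

    alpha in [0, 1] weights the TD stream: early in training alpha is
    small (attention is saliency-driven), later it grows (attention is
    prediction-driven). Inputs are non-negative 2-D maps; the output is
    normalized to sum to 1 so it can be read as an attention distribution.

    NOTE: hypothetical illustration, not the model described in the paper.
    """
    mixed = (1.0 - alpha) * bu_saliency + alpha * td_prediction
    return mixed / mixed.sum()

# Toy 2x2 maps: BU saliency peaks at the top-left pixel,
# TD prediction peaks at the bottom-right pixel.
bu = np.array([[0.7, 0.1], [0.1, 0.1]])
td = np.array([[0.1, 0.1], [0.1, 0.7]])

early = combine_attention(bu, td, alpha=0.1)  # mostly saliency-driven
late = combine_attention(bu, td, alpha=0.9)   # mostly prediction-driven
```

With a small `alpha` the blended map peaks where the saliency map does; with a large `alpha` it peaks where the prediction map does, mirroring the saliency-to-prediction shift the experiments report. In the paper the balance emerges through learning rather than through a hand-set scalar.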