🤖 AI Summary
To address low training efficiency and slow convergence in reinforcement learning—particularly in sensory-neuron systems requiring permutation invariance—this paper proposes an enhanced Sensory Neuron architecture. The core methodological innovation is the incorporation of a nonlinear key-vector mapping (K → K′) into the attention mechanism, enabling richer nonlinear cross-sensor feature interactions while strictly preserving permutation invariance. Critically, this modification introduces no additional parameters or inference overhead. Empirical evaluation shows that the proposed architecture accelerates policy learning substantially: average convergence steps decrease by 37%, and total training time is significantly reduced. Moreover, policy performance matches or exceeds that of the original Sensory Neuron architecture and leading baselines across multiple RL benchmark tasks. This work points toward efficient, structure-aware joint perception-decision modeling under strict architectural constraints.
📝 Abstract
Training reinforcement learning (RL) agents often requires significant computational resources and extended training times. To address this, we build upon the foundation laid by Google Brain's Sensory Neuron, a neural architecture for reinforcement learning tasks that maintains permutation invariance across the sensory neuron system. While the baseline model demonstrated significant performance improvements over traditional approaches, we identified opportunities to further improve the efficiency of the learning process. We propose a modified attention mechanism that applies a non-linear transformation to the key vectors (K) via a mapping function, yielding a new set of key vectors (K'). This non-linear mapping enhances the representational capacity of the attention mechanism, allowing the model to encode more complex feature interactions and accelerating convergence without compromising performance. Our enhanced model demonstrates significant improvements in learning efficiency, showcasing the potential of non-linear attention mechanisms for advancing reinforcement learning algorithms.
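To make the K → K' idea concrete, here is a minimal NumPy sketch of permutation-invariant attention pooling over per-sensor observations with an elementwise nonlinear key mapping. The shapes, weight names (`W_k`, `W_v`, `Q`), and the choice of `tanh` as the mapping function are illustrative assumptions, not the paper's exact formulation; the point is that an elementwise map on K adds no parameters and leaves permutation invariance intact.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(obs, W_k, W_v, Q, phi=np.tanh):
    """Permutation-invariant attention over per-sensor observations.

    obs : (n_sensors, d_obs) -- one row per sensor
    Q   : (n_queries, d_k)   -- fixed queries, independent of sensor order
    phi : elementwise nonlinear key mapping K -> K' (illustrative: tanh)
    """
    K = obs @ W_k                       # (n_sensors, d_k) keys
    K_prime = phi(K)                    # nonlinear mapping: no new parameters
    V = obs @ W_v                       # (n_sensors, d_v) values
    scores = Q @ K_prime.T / np.sqrt(K.shape[1])  # (n_queries, n_sensors)
    A = softmax(scores, axis=-1)        # attention over sensors
    # Summing over sensors makes the output invariant to their ordering.
    return A @ V                        # (n_queries, d_v)

rng = np.random.default_rng(0)
obs = rng.normal(size=(5, 4))
W_k, W_v = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
Q = rng.normal(size=(3, 8))

out = attention_pool(obs, W_k, W_v, Q)
out_perm = attention_pool(obs[rng.permutation(5)], W_k, W_v, Q)
assert np.allclose(out, out_perm)  # permuting sensors leaves the output unchanged
```

Because `phi` acts elementwise on each sensor's key independently, permuting the rows of `obs` only permutes the columns of the score matrix, and the softmax-weighted sum over sensors cancels that permutation.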