StateLinFormer: Stateful Training Enhancing Long-term Memory in Navigation

📅 2026-03-24
📝 Abstract
Effective navigation intelligence relies on long-term memory to support both immediate generalization and sustained adaptation. However, existing approaches face a dilemma: modular systems rely on explicit mapping but lack flexibility, while Transformer-based end-to-end models are constrained by fixed context windows, limiting persistent memory across extended interactions. We introduce StateLinFormer, a linear-attention navigation model trained with a stateful memory mechanism that preserves recurrent memory states across consecutive training segments instead of reinitializing them at each batch boundary. This training paradigm effectively approximates learning on infinitely long sequences, enabling the model to achieve long-horizon memory retention. Experiments across both MAZE and ProcTHOR environments demonstrate that StateLinFormer significantly outperforms its stateless linear-attention counterpart and standard Transformer baselines with fixed context windows. Notably, as interaction length increases, persistent stateful training substantially improves context-dependent adaptation, suggesting an enhancement in the model's In-Context Learning (ICL) capabilities for navigation tasks.
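The core mechanism in the abstract — carrying the linear-attention recurrent state across consecutive training segments instead of reinitializing it at each batch boundary — can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the feature map `phi`, the dimensions, and the plain cumulative-sum recurrence are all assumptions.

```python
import numpy as np

def phi(x):
    # Positive feature map for linear attention; the exact map used by
    # StateLinFormer is an assumption here (shifted ReLU, kept positive).
    return np.maximum(x, 0.0) + 1e-6

def linear_attn_segment(q, k, v, S, z):
    """Causal linear attention over one segment of length T, starting
    from the carried recurrent state (S, z).

    q, k, v: (T, d) arrays; S: (d, d) running sum of phi(k) v^T;
    z: (d,) running sum of phi(k) used for normalization."""
    outs = []
    for t in range(q.shape[0]):
        fk = phi(k[t])
        S = S + np.outer(fk, v[t])      # accumulate key-value associations
        z = z + fk                      # accumulate the normalizer
        fq = phi(q[t])
        outs.append(fq @ S / (fq @ z))  # attention readout at step t
    return np.stack(outs), S, z

# Stateful loop: the recurrent state (S, z) persists across consecutive
# segments instead of being reset at each segment boundary.
rng = np.random.default_rng(0)
d, T = 4, 8
S, z = np.zeros((d, d)), np.zeros(d)    # persistent memory state
segs, ys = [], []
for _ in range(3):                      # three consecutive segments
    q, k, v = (rng.standard_normal((T, d)) for _ in range(3))
    segs.append((q, k, v))
    y, S, z = linear_attn_segment(q, k, v, S, z)  # state carried over
    ys.append(y)
```

Because this recurrence is purely cumulative, processing segments with a carried state is exactly equivalent to processing the concatenated sequence in one pass, which is why stateful training approximates learning on arbitrarily long sequences; in an actual trainer the carried state would typically be detached from the autograd graph between segments.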
Problem

Research questions and friction points this paper is trying to address.

long-term memory
navigation
Transformer
context window
In-Context Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stateful Training
Long-term Memory
Linear Attention
In-Context Learning
Navigation
Zhiyuan Chen
Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China
Yuxuan Zhong
Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China
Fan Wang
Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China
Bo Yu
School of Artificial Intelligence, Jilin University, China
Medical Data Analysis, Machine Learning, Computer Vision
Pengtao Shao
Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China
Shaoshan Liu
PerceptIn
Embodied AI, Autonomous Machine Computing, Computer Systems, Technology Policy
Ning Ding
Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China