TACO: Temporal Consensus Optimization for Continual Neural Mapping

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of continual adaptation in neural implicit mapping under resource-constrained robotic settings, where existing methods struggle with dynamic environments and rely on replaying historical data, hindering real-world deployment. The authors propose a replay-free continual learning framework that reformulates neural mapping as a time-consistent optimization problem. By maintaining historical model snapshots as a temporal neighborhood and introducing a weighted consensus mechanism that integrates past and current knowledge, the method enables efficient, adaptive geometric updates without storing raw observations. This approach pioneers the integration of time-consistent optimization into neural mapping, allowing reliable past geometry to constrain optimization while outdated regions are flexibly corrected. Experiments demonstrate significant performance gains over current baselines in both simulated and real-world environments, with strong robustness to dynamic scene changes.

📝 Abstract
Neural implicit mapping has emerged as a powerful paradigm for robotic navigation and scene understanding. However, real-world robotic deployment requires continual adaptation to changing environments under strict memory and computation constraints, which existing mapping systems fail to support. Most prior methods rely on replaying historical observations to preserve consistency and assume static scenes. As a result, they cannot adapt to continual learning in dynamic robotic settings. To address these challenges, we propose TACO (TemporAl Consensus Optimization), a replay-free framework for continual neural mapping. We reformulate mapping as a temporal consensus optimization problem, where we treat past model snapshots as temporal neighbors. Intuitively, our approach resembles a model consulting its own past knowledge. We update the current map by enforcing weighted consensus with historical representations. Our method allows reliable past geometry to constrain optimization while permitting unreliable or outdated regions to be revised in response to new observations. TACO achieves a balance between memory efficiency and adaptability without storing or replaying previous data. Through extensive simulated and real-world experiments, we show that TACO robustly adapts to scene changes, and consistently outperforms other continual learning baselines.
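The weighted-consensus idea in the abstract can be illustrated with a toy sketch: the current model is fit to new observations while a regularizer pulls its predictions toward those of stored parameter snapshots, so no raw past data is replayed. This is a minimal NumPy illustration under assumed details (a linear model standing in for the neural implicit map, a single snapshot, a hand-picked consensus weight); the function names and weights are hypothetical, not from the paper.

```python
import numpy as np

def f(theta, x):
    # Toy "map": a linear model standing in for a neural implicit field.
    return x @ theta

def consensus_loss(theta, snapshots, weights, x, y):
    # Data term: fit the current observations (x, y).
    data = np.mean((f(theta, x) - y) ** 2)
    # Consensus term: stay close to each past snapshot's *predictions*,
    # weighted by its assumed reliability -- no raw past data is replayed.
    cons = sum(w * np.mean((f(theta, x) - f(s, x)) ** 2)
               for s, w in zip(snapshots, weights))
    return data + cons

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 3))
theta_old = np.array([1.0, -2.0, 0.5])        # snapshot from the past
theta_true = np.array([1.2, -2.0, 0.4])       # the scene has changed slightly
y = x @ theta_true                            # new observations

snapshots, weights = [theta_old], [0.1]       # hypothetical reliability weight
theta, lr = theta_old.copy(), 0.05
for _ in range(200):
    # Closed-form gradient of the quadratic loss above.
    grad = 2 * x.T @ (f(theta, x) - y) / len(x)
    grad += 2 * weights[0] * x.T @ (f(theta, x) - f(theta_old, x)) / len(x)
    theta -= lr * grad

# theta now tracks the changed scene while being anchored to the snapshot:
# the minimizer is a weighted blend of theta_true and theta_old.
```

For this quadratic toy problem the optimum is the blend `(theta_true + 0.1 * theta_old) / 1.1`, which makes the trade-off explicit: a larger snapshot weight preserves more history, a smaller one adapts faster to change.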
Problem

Research questions and friction points this paper is trying to address.

continual learning
neural mapping
dynamic environments
memory constraints
temporal consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

continual neural mapping
temporal consensus optimization
replay-free learning
dynamic scene adaptation
implicit neural representation
🔎 Similar Papers
2023-10-29 · IEEE/RSJ International Conference on Intelligent Robots and Systems · Citations: 0