🤖 AI Summary
Autonomous systems face a fundamental trade-off among computational overhead, energy consumption, and model interpretability in occupancy grid map (OGM) modeling. To address this, we propose VSA-OGM, the first OGM framework to integrate hyperdimensional computing via vector symbolic architectures (VSA) with Fourier-domain vector binding and Shannon-entropy-driven probabilistic updating. Unlike conventional dense statistical inference or neural-network approaches that require extensive domain-specific training, VSA-OGM achieves real-time inference, ultra-low power consumption, and strong interpretability without any training. Experiments show that VSA-OGM matches the accuracy of covariant traditional methods while reducing inference latency by roughly 200× and memory footprint by roughly 1000×. It further cuts latency by 3.7× compared to invariant traditional methods and is 1.5× faster than state-of-the-art neural OGM methods, entirely training-free.
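The "Fourier-domain vector binding" mentioned above can be illustrated with fractional power encoding, a standard VSA technique for embedding continuous coordinates into hypervectors. The sketch below is illustrative only (not the paper's implementation): axis hypervectors are random unit-magnitude phasors, a real-valued coordinate is encoded by raising the phasors to that power, and binding two axes is elementwise multiplication in the Fourier domain. The function names (`fpe`, `bind`, `similarity`) and the dimensionality are assumptions for this example.

```python
import numpy as np

def make_axis_vector(dim, rng):
    # Random unit-magnitude phasors: the Fourier representation of a real seed hypervector.
    phases = rng.uniform(-np.pi, np.pi, dim)
    return np.exp(1j * phases)

def fpe(axis, value):
    # Fractional power encoding: exponentiating the phasors by a real coordinate,
    # equivalent to repeated circular-convolution binding in the spatial domain.
    return axis ** value

def bind(a, b):
    # Binding in the Fourier domain is elementwise multiplication.
    return a * b

def similarity(a, b):
    # Normalized inner product (real part); acts as a similarity kernel over encoded points.
    return np.real(np.vdot(a, b)) / len(a)

rng = np.random.default_rng(0)
dim = 4096
X = make_axis_vector(dim, rng)  # axis hypervector for the x coordinate
Y = make_axis_vector(dim, rng)  # axis hypervector for the y coordinate

p = bind(fpe(X, 1.0), fpe(Y, 2.0))   # encode the point (1.0, 2.0)
q = bind(fpe(X, 1.0), fpe(Y, 2.0))   # the same point
r = bind(fpe(X, 5.0), fpe(Y, -3.0))  # a distant point

print(similarity(p, q))  # identical encodings: similarity 1.0
print(similarity(p, r))  # distant points: similarity near 0
```

Because encoded points that are close in space yield high similarity and distant points yield near-zero similarity, a superposition of many such point encodings can be queried for occupancy evidence at any location, which is what makes this representation attractive for grid mapping.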
📝 Abstract
Real-time robotic systems require advanced perception, computation, and action capabilities. However, the main bottleneck in current autonomous systems is the trade-off among computational capability, energy efficiency, and model determinism. World modeling, a key objective of many robotic systems, commonly uses occupancy grid mapping (OGM) as the first step towards building an end-to-end robotic system with perception, planning, autonomous maneuvering, and decision-making capabilities. OGM divides the environment into discrete cells and assigns probability values to attributes such as occupancy and traversability. Existing methods fall into two categories: traditional methods, which rely on dense statistical calculations, and neural methods, which employ deep learning for probabilistic information processing. Recent works formulate a deterministic theory of neural computation at the intersection of cognitive science and vector symbolic architectures. In this study, we propose a Fourier-based hyperdimensional OGM system, VSA-OGM, combined with a novel application of Shannon entropy, that retains the interpretability and stability of traditional methods along with the improved computational efficiency of neural methods. Our approach, validated across multiple datasets, achieves accuracy similar to covariant traditional methods while reducing latency by approximately 200x and memory by approximately 1000x. Compared to invariant traditional methods, we observe similar accuracy while reducing latency by 3.7x. Moreover, we achieve 1.5x latency reductions compared to neural methods while eliminating the need for domain-specific model training.
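The two ingredients the abstract names, per-cell occupancy probabilities and Shannon entropy, can be sketched with the textbook log-odds occupancy update that "traditional methods" build on. This is a minimal illustration, not VSA-OGM itself: entropy here simply scores each cell's uncertainty (1 bit for a fully unknown cell, near 0 for a confidently mapped one), which is one plausible way an entropy criterion can drive probabilistic updating; the exact role of entropy in VSA-OGM is not reproduced here.

```python
import numpy as np

def logodds(p):
    # Convert a probability to log-odds.
    return np.log(p / (1.0 - p))

def update_cell(l, p_meas):
    # Bayesian log-odds update: accumulate the measurement's log-odds evidence.
    return l + logodds(p_meas)

def prob(l):
    # Convert accumulated log-odds back to an occupancy probability.
    return 1.0 - 1.0 / (1.0 + np.exp(l))

def entropy(p):
    # Shannon entropy of a cell's binary occupancy belief, in bits.
    p = np.clip(p, 1e-9, 1.0 - 1e-9)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

grid = np.zeros((4, 4))                    # log-odds grid; 0 means p = 0.5 (unknown)
grid[1, 2] = update_cell(grid[1, 2], 0.9)  # a "hit" measurement in one cell
grid[1, 2] = update_cell(grid[1, 2], 0.9)  # a second hit in the same cell

print(prob(grid[1, 2]))         # ~0.988: two hits push the cell toward occupied
print(entropy(0.5))             # 1.0 bit: a fully unknown cell
print(entropy(prob(grid[1, 2])))  # low entropy: the cell is now confidently mapped
```

The dense per-cell bookkeeping above is exactly the cost that the hyperdimensional representation is meant to avoid, which is where the reported latency and memory reductions come from.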