🤖 AI Summary
This work addresses the fragmentation of perception, memory, and decision-making in embodied cognition. We propose AUKAI, a multi-scale embodied cognitive framework that establishes a closed-loop integration of perception–memory–decision processes, incorporating cross-scale error feedback to jointly realize world modeling, state prediction, and intervention utility evaluation. To our knowledge, this is the first embodied intelligence architecture to rigorously unify analyses from convergence theory, optimal control, and Bayesian inference. Furthermore, we design a neuro-symbolic hybrid structure that enhances both interpretability and robustness without compromising performance. The framework is demonstrated on robot navigation and obstacle avoidance tasks, with experimental plans covering deployment in both simulated and real-world environments. Theoretical analysis establishes conditions for the system's convergence, Lyapunov stability, and near-optimality under bounded uncertainty.
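The paper does not ship a reference implementation alongside this summary, so the following is only a minimal sketch of what a closed perception–memory–decision loop with error feedback could look like. Everything here is our own illustrative assumption: the toy linear dynamics, the exponential-trace multi-scale memory, and a random exploratory policy standing in for the decision module; none of it is AUKAI's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth environment: stable linear dynamics + noise.
A_true = np.array([[0.95, 0.1], [0.0, 0.9]])
B_true = np.array([[0.0], [0.5]])

def env_step(x, u):
    """One step of the (assumed) true dynamics with small process noise."""
    return A_true @ x + B_true @ u + 0.01 * rng.standard_normal(2)

class MultiScaleMemory:
    """Memory as exponential traces at several time constants (one per scale)."""
    def __init__(self, dim, taus=(0.5, 0.9, 0.99)):
        self.taus = taus
        self.traces = [np.zeros(dim) for _ in taus]

    def update(self, z):
        for i, tau in enumerate(self.taus):
            self.traces[i] = tau * self.traces[i] + (1 - tau) * z
        return np.concatenate([z, *self.traces])  # fused multi-scale state

class WorldModel:
    """Learned linear world model corrected online by prediction error."""
    def __init__(self, dim, udim):
        self.A = np.eye(dim)
        self.B = np.zeros((dim, udim))

    def predict(self, x, u):
        return self.A @ x + self.B @ u

    def correct(self, x, u, error, lr=0.05):
        # LMS-style gradient step on the squared prediction error.
        self.A += lr * np.outer(error, x)
        self.B += lr * np.outer(error, u)

memory = MultiScaleMemory(dim=2)
model = WorldModel(dim=2, udim=1)
x = np.zeros(2)
for t in range(500):
    z = memory.update(x)                  # perception + multi-scale memory fusion
    u = 0.5 * rng.standard_normal(1)      # placeholder exploratory policy
    x_pred = model.predict(x, u)          # predict the consequence of acting
    x_prev, x = x, env_step(x, u)         # act in the environment
    model.correct(x_prev, u, x - x_pred)  # prediction error drives learning

print("final one-step prediction error:", np.linalg.norm(x - x_pred))
```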
📝 Abstract
In this paper, we propose AUKAI, an Adaptive Unified Knowledge-Action Intelligence for embodied cognition that integrates perception, memory, and decision-making via multi-scale error feedback. We interpret AUKAI as an embedded world model that simultaneously predicts state transitions and evaluates intervention utility. The framework is underpinned by rigorous theoretical analysis drawn from convergence theory, optimal control, and Bayesian inference, which collectively establish conditions for convergence, stability, and near-optimal performance. Furthermore, we present a hybrid implementation that combines the strengths of neural networks with symbolic reasoning modules, thereby enhancing interpretability and robustness. Finally, we demonstrate the potential of AUKAI through a detailed application in robotic navigation and obstacle avoidance, and we outline comprehensive experimental plans to validate its effectiveness in both simulated and real-world environments.
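The abstract's pairing of state prediction with intervention-utility evaluation can be read in a standard Bayesian-filtering and expected-utility form. The notation below is ours, a minimal formal sketch under that standard reading rather than the paper's own derivation:

```latex
% Belief update over latent state s after acting and observing (Bayesian filtering):
b_{t+1}(s') \;\propto\; p(o_{t+1} \mid s') \sum_{s} p(s' \mid s, a_t)\, b_t(s)

% Utility of an intervention a under the current belief b_t, combining
% immediate reward r with the discounted value V of the predicted next state:
U(a \mid b_t) \;=\; \mathbb{E}_{s \sim b_t,\; s' \sim p(\cdot \mid s, a)}
  \big[\, r(s, a) + \gamma\, V(s') \,\big]
```

Under this reading, the world model supplies the transition term $p(s' \mid s, a)$ used by both equations, so prediction and intervention evaluation share one learned object, which is consistent with the closed-loop design described above.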