🤖 AI Summary
This work addresses a key limitation of large language models: trained exclusively on static text, they struggle to verify their reasoning or adapt in open, dynamic environments. To overcome this, the paper proposes a human cognition-inspired, closed-loop intelligence framework that unifies thinking, acting, learning, reflection, and activity scheduling into a cohesive internal reasoning process. The framework enables action-driven self-optimization through main-feature-oriented reasoning, scope expansion through action, and immediate learning from environmental feedback. Theoretical analysis suggests that this approach compensates for the inherent deficiencies of pure language models in reasoning verification and environmental adaptation, improving the robustness and interactive effectiveness of AI systems in real-world scenarios.
📝 Abstract
Large language models (LLMs) have demonstrated strong capabilities in knowledge representation and reasoning over textual data. However, their reliance on language material alone limits their ability to adapt, verify reasoning outcomes, and operate effectively in open and dynamic real-world environments. In this paper, we propose Human Simulation Computation (HSC), a human-inspired computational framework that models intelligence as a continuous, closed-loop process of thinking, action, learning, reflection, and activity scheduling, collectively referred to as the internal reasoning process. HSC emphasizes active participation both within this internal reasoning process and in interaction with the environment: actions serve not only to achieve goals but also to automatically refine and improve the internal reasoning mechanisms without external intervention. Furthermore, HSC incorporates commonly used human thinking strategies at every stage of the internal reasoning process, such as main-feature-oriented reasoning, scope expansion through action, and on-time learning driven by environmental feedback. Through theoretical analysis, we argue that human simulation strategies cannot be fully learned from language material alone, and that human-like reasoning processes and action-grounded reasoning methods are essential for robust adaptation and effective interaction with real-world environments.
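The abstract describes the closed loop only conceptually and gives no implementation. The sketch below is a minimal, hypothetical Python rendering of such a loop (thinking, action, on-time learning, reflection, activity scheduling); all names here (`HSCAgent`, `Memory`, `step`, `reflect`, `run`) are illustrative assumptions, not the paper's API.

```python
# Hypothetical sketch of a closed-loop agent in the spirit of HSC.
# The paper specifies no interfaces; everything here is an assumption.

from dataclasses import dataclass, field
from typing import Any, Callable, List


@dataclass
class Memory:
    """Accumulates environmental feedback and reflections used to refine reasoning."""
    episodes: List[dict] = field(default_factory=list)

    def record(self, entry: dict) -> None:
        self.episodes.append(entry)


class HSCAgent:
    """Toy closed-loop agent: think of a plan, act in the environment,
    learn from the feedback immediately, then reflect to adjust future reasoning."""

    def __init__(self,
                 reason: Callable[[str, Memory], str],
                 act: Callable[[str], Any]) -> None:
        self.reason = reason      # thinking: produce a candidate action from task + memory
        self.act = act            # action: interact with the environment, return feedback
        self.memory = Memory()    # learning: store action-grounded feedback

    def step(self, task: str) -> Any:
        plan = self.reason(task, self.memory)                  # thinking
        feedback = self.act(plan)                              # action + environmental feedback
        self.memory.record({"task": task, "plan": plan,
                            "feedback": feedback})             # on-time learning
        self.reflect(task, plan, feedback)                     # reflection
        return feedback

    def reflect(self, task: str, plan: str, feedback: Any) -> None:
        # Reflection hook: a full system would revise its reasoning strategy
        # based on discrepancies between expectation and observed feedback.
        pass

    def run(self, tasks: List[str]) -> None:
        # Activity scheduling: kept sequential here; the framework leaves the
        # scheduling policy open.
        for task in tasks:
            self.step(task)
```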