🤖 AI Summary
This work proposes a novel framework inspired by human cognition that treats question generation as a first-class decision-making process, enabling AI systems to autonomously identify tasks in dynamic, open-ended environments. Unlike conventional large language model–driven systems that rely on predefined tasks and static prompts, the proposed approach integrates intrinsic motivation, environmental perception, and multi-agent awareness through a tripartite prompting mechanism. By fusing large language models with multi-agent simulation, cognitive reasoning, and reinforcement learning, the framework allows agents to learn questioning strategies from experience. Experimental results in a multi-agent simulation environment demonstrate that environmental perception prompts significantly reduce no-eat events, and the addition of multi-agent awareness further decreases cumulative adverse events by over 60% (p<0.05), confirming the method's effectiveness and adaptability.
📝 Abstract
Large language model (LLM)-driven AI systems are increasingly important for autonomous decision-making in dynamic and open environments. However, most existing systems rely on predefined tasks and fixed prompts, limiting their ability to autonomously identify which problems should be solved when environmental conditions change. In this paper, we propose a human-simulation-based framework that enables AI systems to autonomously form questions and set tasks by reasoning over their internal states, environmental observations, and interactions with other AI systems. The proposed method treats question formation as a first-class decision process preceding task selection and execution, and integrates internal-driven, environment-aware, and inter-agent-aware prompting scopes to progressively expand cognitive coverage. In addition, the framework supports learning the question-formation process from experience, allowing the system to improve its adaptability and decision quality over time. Experimental results in a multi-agent simulation environment show that environment-aware prompting significantly reduces no-eat events compared with the internal-driven baseline, and inter-agent-aware prompting further reduces cumulative no-eat events by more than 60% over a 20-day simulation, with statistically significant improvements (p<0.05).
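The three prompting scopes described above can be sketched as a small toy example. Everything below is an illustrative assumption, not the paper's actual implementation: the class and function names (`AgentState`, `build_prompt`, `form_question`) are hypothetical, and a simple rule-based stand-in replaces the LLM call, but the structure shows the key idea that question formation happens before any task is selected, with each wider scope adding more context to the prompt.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the tripartite prompting scopes; all names are
# illustrative assumptions, not the paper's API.

@dataclass
class AgentState:
    hunger: float                  # internal drive in [0, 1]
    food_visible: bool             # environmental observation
    peer_requests: list = field(default_factory=list)  # messages from other agents

def build_prompt(state: AgentState, scope: str) -> str:
    """Assemble a question-formation prompt at one of the three scopes.

    Each wider scope strictly adds context on top of the narrower one.
    """
    parts = [f"internal: hunger={state.hunger:.2f}"]
    if scope in ("environment", "inter-agent"):
        parts.append(f"environment: food_visible={state.food_visible}")
    if scope == "inter-agent":
        parts.append(f"peers: {state.peer_requests}")
    return "; ".join(parts)

def form_question(state: AgentState, scope: str) -> str:
    """Question formation precedes task selection: the agent first decides
    what problem to pose; a concrete task would then be derived from it.
    A rule-based stand-in replaces the LLM call here.
    """
    _prompt = build_prompt(state, scope)  # would be sent to an LLM
    if scope == "inter-agent" and state.peer_requests:
        return "Should I share food with a requesting peer or eat first?"
    if scope in ("environment", "inter-agent") and state.food_visible:
        return "Should I move toward the visible food now?"
    if state.hunger > 0.5:
        return "Where can I find food?"
    return "Is there anything urgent to do?"

state = AgentState(hunger=0.8, food_visible=True,
                   peer_requests=["agent_2: need food"])
for scope in ("internal", "environment", "inter-agent"):
    print(f"{scope}: {form_question(state, scope)}")
```

In this toy setting, the internal-driven scope can only react to hunger, the environment-aware scope notices visible food (which is how it could avoid no-eat events), and the inter-agent-aware scope additionally weighs peer requests; a learned policy, as in the paper's reinforcement-learning component, would replace the fixed rules.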