🤖 AI Summary
To address the challenge of human-AI collaborative decision-making in complex, dynamic real-time strategy (RTS) environments, this paper proposes a tightly coupled large language model (LLM) and behavior tree (BT) framework for human-AI co-play in *StarCraft II*. The framework employs a natural language interface—integrating speech recognition/synthesis and semantic instruction parsing—to enable low-latency, interpretable tactical modulation, allowing human players, especially novices and users with disabilities, to dynamically adjust strategies. Its key innovation lies in embedding the LLM directly into the BT execution layer, overcoming the rigidity of conventional RTS AI policies. User studies demonstrate a 37% improvement in decision speed, a 52% increase in strategic adaptability, a novice win rate 2.3× higher than the baseline, and a 91% task completion rate among users with disabilities.
📝 Abstract
We present Adaptive Command, a novel framework integrating large language models (LLMs) with behavior trees for real-time strategic decision-making in *StarCraft II*. Our system focuses on enhancing human-AI collaboration in complex, dynamic environments through natural language interactions. The framework comprises: (1) an LLM-based strategic advisor, (2) a behavior tree for action execution, and (3) a natural language interface with speech capabilities. User studies demonstrate significant improvements in player decision-making and strategic adaptability, particularly benefiting novice players and those with disabilities. This work contributes to the field of real-time human-AI collaborative decision-making, offering insights applicable beyond RTS games to various complex decision-making scenarios.
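To make the architecture concrete, the following is a minimal, hypothetical sketch of the coupling the abstract describes: an LLM "strategic advisor" embedded as a node inside a behavior tree, so a natural-language command can re-parameterize tactics between ticks. All names (`AdvisorNode`, `llm_parse`, `Blackboard`) are illustrative, not the authors' API, and the LLM is stubbed with a keyword parser for self-containment.

```python
# Hypothetical sketch, not the paper's implementation: an LLM advisor
# node sits inside the behavior tree and rewrites shared state that the
# execution nodes read on every tick.

from dataclasses import dataclass, field

SUCCESS, FAILURE = "SUCCESS", "FAILURE"

@dataclass
class Blackboard:
    """Shared state read and written by every tree node."""
    stance: str = "defend"            # current tactical stance
    pending_command: str = ""         # latest natural-language instruction
    log: list = field(default_factory=list)

def llm_parse(command: str):
    """Stand-in for the LLM advisor: map free text to a stance."""
    text = command.lower()
    if "attack" in text or "push" in text:
        return "attack"
    if "defend" in text or "hold" in text:
        return "defend"
    return None  # instruction not understood; keep current stance

class AdvisorNode:
    """BT leaf that consults the (stubbed) LLM and updates the blackboard."""
    def tick(self, bb: Blackboard) -> str:
        if bb.pending_command:
            stance = llm_parse(bb.pending_command)
            bb.pending_command = ""
            if stance:
                bb.stance = stance
        return SUCCESS

class ExecuteStance:
    """BT leaf that 'executes' the current stance (logged for the demo)."""
    def tick(self, bb: Blackboard) -> str:
        bb.log.append(bb.stance)
        return SUCCESS

class Sequence:
    """Standard BT sequence: tick children in order until one fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb: Blackboard) -> str:
        for child in self.children:
            if child.tick(bb) == FAILURE:
                return FAILURE
        return SUCCESS

bb = Blackboard()
tree = Sequence(AdvisorNode(), ExecuteStance())
tree.tick(bb)                            # no command yet -> default stance
bb.pending_command = "Push their base now!"
tree.tick(bb)                            # advisor re-parameterizes the tree
print(bb.log)                            # ['defend', 'attack']
```

Because the advisor runs inside the tree's tick loop rather than replacing it, the deterministic execution layer keeps its predictability while the stance it executes can change on any tick, which is the low-latency, interpretable modulation the summary attributes to the framework.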