🤖 AI Summary
This work addresses the vulnerability of large language models to minor state perturbations in high-frequency decision-making tasks and the misalignment between subtask and composite-task policies, which collectively limit performance. To overcome these issues, the authors propose an action reward normalization method that theoretically preserves the optimal policy while integrating a consistency loss mechanism to achieve semantic alignment between local subtask policies and the global strategy. By incorporating dense environmental feedback, normalized reward signals, and joint policy generation, the approach significantly enhances decision accuracy and robustness in high-frequency scenarios. Experimental results demonstrate superior performance over existing methods in tasks such as drone pursuit, with consistent gains on both individual and composite tasks, as well as strong generalization capabilities.
📄 Abstract
While Large Language Models (LLMs) form the cornerstone of sequential decision-making agent development, they have inherent limitations in high-frequency decision tasks. Existing research mainly focuses on discrete embodied decision scenarios with low frequency and significant semantic differences in the state space (e.g., household planning). These methods perform poorly on high-frequency decision-making tasks, since the high-precision numerical state information in such tasks is updated frequently with only minimal fluctuations, and they exhibit policy misalignment between the learned sub-tasks and composite tasks. To address these issues, this paper proposes Normalized Action Reward guided Consistency Policy Optimization (NAR-CP). 1) Our method first acquires predefined dense rewards for candidate actions from environmental feedback via reward functions, then completes reward shaping through normalization, and theoretically verifies that action reward normalization does not impair the optimal policy. 2) To reduce policy misalignment in composite tasks, we use LLMs to infer candidate actions from sub-observations and generate joint policies, with a consistency loss ensuring precise alignment between the global semantic policy and the sub-semantic policies. Experiments on UAV pursuit, a typical high-frequency task, show that our method delivers superior performance on independent and composite tasks with excellent generalization to unseen tasks.
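The abstract does not specify the exact normalization or consistency formulation, but the two mechanisms can be illustrated with a minimal sketch. Everything below is an illustrative assumption: min-max scaling stands in for the paper's reward normalization, a KL divergence stands in for its consistency loss, and all function names are hypothetical.

```python
import numpy as np

def normalize_action_rewards(rewards):
    """Min-max normalize dense candidate-action rewards to [0, 1].

    A positive affine transform of per-state action rewards leaves the
    action ranking (and hence the greedy policy) unchanged -- the intuition
    behind the claim that reward normalization preserves the optimal policy.
    """
    rewards = np.asarray(rewards, dtype=float)
    span = rewards.max() - rewards.min()
    return (rewards - rewards.min()) / (span + 1e-8)

def softmax(logits):
    # Numerically stable softmax over candidate-action logits.
    z = np.asarray(logits, dtype=float) - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def consistency_loss(global_logits, sub_logits_list):
    """Mean KL(global || sub) over the sub-observation policies.

    Penalizes divergence between the global policy over candidate actions
    and each sub-task policy, pulling local and global policies into
    alignment; the loss is zero iff every sub-policy matches the global one.
    """
    p = softmax(global_logits)
    total = 0.0
    for sub_logits in sub_logits_list:
        q = softmax(sub_logits)
        total += float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))
    return total / len(sub_logits_list)
```

In this sketch the normalized rewards would shape the policy-optimization signal, while the consistency term would be added to the training objective with some weight; both choices (scaling scheme, divergence, weighting) are stand-ins for whatever the paper actually uses.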