LLMs for High-Frequency Decision-Making: Normalized Action Reward-Guided Consistency Policy Optimization

📅 2026-03-03
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of large language models to minor state perturbations in high-frequency decision-making tasks and the misalignment between subtask and composite-task policies, which collectively limit performance. To overcome these issues, the authors propose an action reward normalization method that theoretically preserves the optimal policy while integrating a consistency loss mechanism to achieve semantic alignment between local subtask policies and the global strategy. By incorporating dense environmental feedback, normalized reward signals, and joint policy generation, the approach significantly enhances decision accuracy and robustness in high-frequency scenarios. Experimental results demonstrate superior performance over existing methods in tasks such as drone pursuit, with consistent gains on both individual and composite tasks, as well as strong generalization capabilities.
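To make the normalization idea concrete, here is a minimal Python sketch. The function name and the min-max scheme are assumptions for illustration, not the paper's exact formulation; the point is that a monotonically increasing transform applied within a single decision step preserves the ranking of candidate actions, so the reward-greedy action is unchanged.

```python
import numpy as np

def normalize_action_rewards(rewards, eps=1e-8):
    """Min-max normalize the dense rewards of candidate actions at one step.

    A monotonically increasing per-step transform preserves the ranking of
    candidate actions, so the reward-greedy (optimal) action is unchanged --
    the property the paper proves for its normalization scheme.
    (Illustrative sketch; not the authors' implementation.)
    """
    rewards = np.asarray(rewards, dtype=np.float64)
    lo, hi = rewards.min(), rewards.max()
    return (rewards - lo) / (hi - lo + eps)

# Raw dense rewards for three candidate actions: high-precision values
# with only minimal fluctuations, as in high-frequency tasks.
raw = [0.0310, 0.0297, 0.0305]
norm = normalize_action_rewards(raw)      # -> approx. [1.0, 0.0, 0.62]
assert np.argmax(raw) == np.argmax(norm)  # optimal action preserved
```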


📝 Abstract
While Large Language Models (LLMs) form the cornerstone of sequential decision-making agent development, they have inherent limitations in high-frequency decision tasks. Existing research mainly focuses on discrete embodied decision scenarios with low-frequency updates and large semantic differences in the state space (e.g., household planning). These methods suffer from limited performance in high-frequency decision-making tasks, since the high-precision numerical state information in such tasks is updated frequently with only minimal fluctuations, and they exhibit policy misalignment between the learned sub-tasks and composite tasks. To address these issues, this paper proposes Normalized Action Reward guided Consistency Policy Optimization (NAR-CP). 1) Our method first acquires predefined dense rewards for candidate actions from environmental feedback via reward functions, then completes reward shaping through normalization, and theoretically verifies that action reward normalization does not alter the optimal policy. 2) To reduce policy misalignment in composite tasks, we use LLMs to infer candidate actions for sub-observations and generate joint policies, with a consistency loss ensuring precise alignment between the global semantic policy and the sub-semantic policies. Experiments on UAV pursuit, a typical high-frequency task, show that our method delivers superior performance on independent and composite tasks, with excellent generalization to unseen tasks.
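A consistency loss of the kind the abstract describes can be sketched as follows, assuming a PyTorch setting in which the joint policy and each sub-observation policy score the same set of candidate actions. The KL form, tensor shapes, and names are assumptions for illustration rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def consistency_loss(global_logits, sub_logits_list):
    """Pull each sub-observation policy toward the joint (global) policy.

    global_logits:   [batch, n_candidates] logits of the joint policy
    sub_logits_list: list of [batch, n_candidates] logits, one per sub-task
    (Illustrative sketch; the paper's exact loss may differ.)
    """
    global_log_probs = F.log_softmax(global_logits, dim=-1)
    loss = global_logits.new_zeros(())
    for sub_logits in sub_logits_list:
        sub_log_probs = F.log_softmax(sub_logits, dim=-1)
        # KL(global || sub): penalizes sub-policies whose action
        # distributions diverge from the global semantic policy.
        loss = loss + F.kl_div(sub_log_probs, global_log_probs,
                               reduction="batchmean", log_target=True)
    return loss / len(sub_logits_list)

# Toy usage: a joint policy and two sub-task policies over 4 candidates.
g = torch.randn(8, 4)
subs = [torch.randn(8, 4), torch.randn(8, 4)]
print(consistency_loss(g, subs))
```

In training, a term like this would typically be added to the policy-optimization objective with a weighting coefficient.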
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
High-Frequency Decision-Making
Policy Misalignment
Composite Tasks
Normalized Action Reward
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
High-Frequency Decision-Making
Reward Normalization
Consistency Policy Optimization
Policy Alignment
🔎 Similar Papers
No similar papers found.
Yang Zhao
School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, China
Zihao Li
China University of Geosciences, Wuhan
Computer Vision · Remote Sensing · Deep Learning
Zhiyu Jiang
Professor, University of Agder
Offshore Renewable Energy · Marine Structures · Marine Operation · Structural Dynamics
Dandan Ma
School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, China
Ganchao Liu
School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, China
Wenzhe Zhao
School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, China