RetailBench: Evaluating Long-Horizon Autonomous Decision-Making and Strategy Stability of LLM Agents in Realistic Retail Environments

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large language model (LLM) agents struggle to achieve stable, interpretable long-term autonomous decision-making in dynamic, real-world retail environments. To address this challenge, this work proposes RetailBench, a high-fidelity retail simulation benchmark, and introduces a hierarchical evolutionary framework that decouples strategic planning from execution. In this architecture, high-level policies evolve adaptively on their own, slower timescale, while a low-level execution layer handles concrete operational actions. This separation substantially improves both the stability and interpretability of agent decisions under non-stationary conditions. Experiments across eight mainstream LLMs show that the framework improves operational efficiency, while also revealing fundamental limitations of current models in complex, long-horizon, multi-factor decision-making scenarios.

📝 Abstract
Large Language Model (LLM)-based agents have achieved notable success on short-horizon and highly structured tasks. However, their ability to maintain coherent decision-making over long horizons in realistic, dynamic environments remains an open challenge. We introduce RetailBench, a high-fidelity benchmark designed to evaluate long-horizon autonomous decision-making in realistic commercial scenarios, where agents must operate under stochastic demand and evolving external conditions. We further propose the Evolving Strategy & Execution framework, which separates high-level strategic reasoning from low-level action execution. This design enables adaptive and interpretable strategy evolution over time. It is particularly important for long-horizon tasks, where non-stationary environments and error accumulation require strategies to be revised at a different temporal scale than action execution. Experiments with eight state-of-the-art LLMs across progressively challenging environments show that our framework improves operational stability and efficiency over baseline approaches. However, performance degrades substantially as task complexity increases, revealing fundamental limitations of current LLMs in long-horizon, multi-factor decision-making.
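The core idea of decoupling strategy from execution on different timescales can be illustrated with a toy simulation. The sketch below is an illustrative assumption, not the paper's implementation: all names (`run_episode`, `reorder_point`, `order_qty`) are hypothetical, demand is a simple random draw, and the strategy revision step is a hand-written adaptive rule standing in for the LLM-driven strategy evolution the paper describes.

```python
import random

def run_episode(horizon=30, strategy_interval=10, seed=0):
    """Toy sketch of strategy-execution decoupling.

    A high-level strategy (here, a hypothetical inventory policy) is revised
    only every `strategy_interval` steps, while low-level execution (selling
    and reordering stock) happens at every step under the current strategy.
    """
    rng = random.Random(seed)
    strategy = {"reorder_point": 5, "order_qty": 10}  # hypothetical policy
    inventory, log = 20, []
    for t in range(horizon):
        if t > 0 and t % strategy_interval == 0:
            # Slow timescale: strategy evolution. In the paper an LLM would
            # revise the plan; here a toy rule reacts to recent stockouts.
            recent = log[-strategy_interval:]
            if any(step["lost"] > 0 for step in recent):
                strategy["reorder_point"] += 2
                strategy["order_qty"] += 5
        # Fast timescale: execution under the current strategy.
        demand = rng.randint(0, 6)          # stand-in for stochastic demand
        sold = min(inventory, demand)
        lost = demand - sold                # unmet demand
        inventory -= sold
        if inventory <= strategy["reorder_point"]:
            inventory += strategy["order_qty"]  # place a replenishment order
        log.append({"t": t, "sold": sold, "lost": lost, "inv": inventory})
    return strategy, log

final_strategy, log = run_episode()
print(len(log), final_strategy)
```

The point of the separation is visible in the loop structure: execution errors at individual steps do not immediately perturb the strategy, which is only updated from aggregated recent outcomes, mirroring the stability argument made in the abstract.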
Problem

Research questions and friction points this paper is trying to address.

long-horizon decision-making
strategy stability
LLM agents
realistic retail environments
autonomous decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

RetailBench
long-horizon decision-making
strategy evolution
LLM agents
strategy-execution decoupling