A New DAPO Algorithm for Stock Trading

📅 2025-05-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high computational cost and weak semantic understanding inherent in reinforcement learning (RL) for financial trading. Methodologically, it introduces an RL–large language model (LLM) collaborative trading agent: (i) the first adaptation of the DAPO framework to financial trading; (ii) integration of an enhanced Group Relative Policy Optimization (GRPO) with dynamic sampling; and (iii) fine-tuned LLMs that parse financial news to generate risk and sentiment signals, strengthening the policy's semantic awareness. Contributions include a lightweight, efficient RL–LLM joint architecture that achieves a 230.49% cumulative return and an information ratio of 0.37 on the NASDAQ-100 (FNSPID) benchmark, substantially outperforming the CPPO-DeepSeek baseline. Moreover, training time is reduced by 69% and memory consumption is significantly lowered, jointly improving strategy performance, training efficiency, and robustness.
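The dynamic-sampling idea behind the DAPO-augmented GRPO described above can be pictured as follows: rewards within a group of rollouts are normalized against the group's own statistics, and groups whose rewards are all (near-)identical are discarded because they carry no gradient signal. This is a minimal sketch for exposition only; the function names (`group_relative_advantages`, `dynamic_sample`) and thresholds are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize each rollout's reward against
    the mean and std of its own sampling group."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def dynamic_sample(groups, min_std=1e-6):
    """DAPO-style dynamic sampling: drop degenerate groups whose rewards
    are (near-)identical, since their normalized advantages are ~0 and
    contribute no learning signal."""
    return [g for g in groups if np.asarray(g, dtype=float).std() > min_std]

# Toy usage: three groups of per-rollout trading rewards.
groups = [
    [1.0, 1.0, 1.0],    # degenerate: identical rewards, no signal
    [0.2, -0.1, 0.5],   # informative
    [0.0, 0.3, -0.3],   # informative
]
informative = dynamic_sample(groups)
advs = [group_relative_advantages(g) for g in informative]
```

Filtering before the policy update is one plausible source of the efficiency gains the summary reports: compute is not spent on zero-gradient groups.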

📝 Abstract
Recent advances in reinforcement learning, such as Dynamic Sampling Policy Optimization (DAPO), show strong performance when paired with large language models (LLMs). Motivated by this success, we ask whether similar gains can be realized in financial trading. We design a trading agent that combines an improved Group Relative Policy Optimization (GRPO) algorithm, augmented with ideas from DAPO, with LLM-based risk and sentiment signals extracted from financial news. On the NASDAQ-100 index (FNSPID dataset), our agent attains a cumulative return of 230.49 percent and an information ratio of 0.37, outperforming the CPPO-DeepSeek baseline. It also cuts training time from about 8 hours to 2.5 hours over 100 epochs while markedly reducing RAM usage. The proposed RL-LLM framework offers a scalable path toward data-efficient trading agents. Code: https://github.com/Ruijian-Zha/FinRL-DAPO-SR/
Problem

Research questions and friction points this paper is trying to address.

Enhancing stock trading performance using improved RL algorithms
Combining LLM-based signals with DAPO-augmented GRPO for finance
Reducing training time and resource usage in trading agents
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines improved GRPO with DAPO enhancements
Integrates LLM-based risk and sentiment signals
Reduces training time and RAM usage significantly
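One simple way to picture the LLM-signal integration these bullets describe is reward shaping: the raw portfolio return is augmented by an LLM-derived sentiment score and penalized by an LLM-derived risk score. This is a hedged sketch under stated assumptions; the function name `shaped_reward`, the linear form, and the weights `alpha`/`beta` are illustrative choices, not the paper's actual reward design.

```python
def shaped_reward(portfolio_return: float, sentiment: float, risk: float,
                  alpha: float = 0.1, beta: float = 0.1) -> float:
    """Illustrative reward shaping for an RL trading agent.

    portfolio_return: per-step return of the portfolio
    sentiment: LLM-derived news sentiment, assumed in [-1, 1]
    risk: LLM-derived risk score, assumed in [0, 1]
    alpha, beta: illustrative weights (not from the paper)
    """
    return portfolio_return + alpha * sentiment - beta * risk

# Positive sentiment boosts the reward; a high risk score dampens it.
r = shaped_reward(0.02, sentiment=0.5, risk=0.3)
```

Shaping the reward, rather than feeding raw news text into the policy network, keeps the RL component lightweight, which is consistent with the training-time and memory reductions listed above.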
🔎 Similar Papers
No similar papers found.
Ruijian Zha
Department of Computer Science, Columbia University
Bojun Liu
University of Science and Technology of China
image compression · point cloud compression