🤖 AI Summary
Reinforcement learning (RL) for financial trading faces a critical bottleneck: low-efficiency market simulation impedes effective integration of large language model (LLM)-generated textual signals and exacerbates sampling constraints.
Method: We propose a GPU-accelerated parallel market simulation framework, the first RL environment enabling real-time ingestion of LLM-derived financial semantic signals alongside structured market data. Leveraging CUDA kernel optimization and batched environment encapsulation, it supports thousands of concurrent simulations, overcoming CPU-bound throughput limitations.
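The batched environment design described above can be illustrated with a minimal sketch: all concurrent simulations are stored along a leading batch dimension and advanced with one vectorized `step` call. The class and method names below are illustrative, not the framework's actual API, and NumPy arrays stand in for the CUDA tensors a GPU implementation would use.

```python
import numpy as np

class BatchedMarketEnv:
    """Minimal sketch of a batched market environment (illustrative,
    not the actual FinRL API). All num_envs simulations advance in a
    single vectorized step; on GPU these arrays would be CUDA tensors."""

    def __init__(self, num_envs, prices, initial_cash=1e6):
        self.num_envs = num_envs
        self.prices = prices                       # (T,) shared price path
        self.initial_cash = initial_cash
        self.t = np.zeros(num_envs, dtype=np.int64)
        self.cash = np.full(num_envs, initial_cash)
        self.shares = np.zeros(num_envs)

    def reset(self):
        self.t[:] = 0
        self.cash[:] = self.initial_cash
        self.shares[:] = 0.0
        return self.prices[self.t]                 # (num_envs,) observations

    def step(self, actions):
        """actions: (num_envs,) signed trade sizes, applied in parallel."""
        p = self.prices[self.t]
        self.cash -= actions * p                   # execute trades at current price
        self.shares += actions
        self.t += 1
        p_next = self.prices[self.t]
        reward = self.shares * (p_next - p)        # mark-to-market PnL per env
        done = self.t >= len(self.prices) - 1
        return p_next, reward, done

# 4096 concurrent simulations advance with one call each step.
env = BatchedMarketEnv(num_envs=4096, prices=np.linspace(100.0, 110.0, 64))
obs = env.reset()
obs, reward, done = env.step(np.ones(4096))
```

Because every environment shares the same vectorized arithmetic, throughput scales with batch width rather than with the number of Python-level loop iterations, which is what removes the CPU-bound sampling limit.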
Contribution/Results: The framework achieves up to 100× training speedup over conventional CPU-based simulators. It has served as the foundational infrastructure for FinRL Contests 2023–2025, enabling over 100 participating teams to efficiently train multi-asset (equities and cryptocurrencies) trading agents. By unifying high-fidelity market dynamics with real-time LLM signals at scale, it establishes a scalable, high-throughput infrastructure paradigm for LLM-augmented RL in finance.
📄 Abstract
Reinforcement learning has shown great potential in finance. We have organized the FinRL Contests 2023–2025, featuring different financial tasks. Large language models (LLMs) are highly capable of processing financial text, and integrating LLM-generated signals into FinRL is a new task that enables agents to use both structured market data and unstructured financial text. To address the sampling bottleneck during training, we introduce GPU-based parallel market environments that improve sampling speed. In this paper, we summarize the parallel market environments used in FinRL Contests 2023–2025. Two new environments incorporate LLM-generated signals and support massively parallel simulation. Contestants use these environments to train agents for stock and cryptocurrency trading tasks.
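One way the combination of structured market data and LLM-generated signals can be realized is by concatenating both into a single agent observation. The sketch below is a hypothetical illustration under that assumption; the feature layout, dimensions, and function name are not taken from the contest environments.

```python
import numpy as np

def build_observation(market_state, llm_signal):
    """Fuse structured market features with LLM-derived per-asset
    sentiment scores into one agent observation (illustrative layout;
    the actual contest environments define their own feature ordering)."""
    # market_state: (num_envs, num_market_features)
    # llm_signal:   (num_envs, num_assets)
    return np.concatenate([market_state, llm_signal], axis=1)

rng = np.random.default_rng(0)
market = rng.normal(size=(2048, 16))                    # e.g. prices + technical indicators
sentiment = np.clip(rng.normal(size=(2048, 4)), -1, 1)  # LLM sentiment per asset, in [-1, 1]
obs = build_observation(market, sentiment)              # (2048, 20) batched observations
```

Keeping the fusion as a vectorized array operation means LLM signals can be ingested across all parallel environments at once, matching the batched simulation design.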