Learning from Expert Factors: Trajectory-level Reward Shaping for Formulaic Alpha Mining

πŸ“… 2025-07-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
In formulaic alpha factor mining, reinforcement learning faces sparse rewards, an exponentially large symbolic search space, and training instability. To address these challenges, this paper proposes a trajectory-level reward shaping framework: (i) dense trajectory-level feedback is constructed via subsequence similarity to expert formulas; (ii) a potential-based reward shaping function guides the search direction; and (iii) reward centering is applied to reduce variance. Additionally, the time complexity with respect to the feature dimension is reduced from linear to constant. The method significantly improves exploration efficiency and training stability. Experiments on six major equity index datasets show that, compared to baseline methods, the proposed approach improves the Rank Information Coefficient by 9.29%, with substantial gains in computational efficiency and factor predictive power.
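The two core ideas in the summary, dense rewards from subsequence similarity to expert formulas and potential-based shaping, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the expert formula pool, the tokenization, and the use of `difflib.SequenceMatcher` as the similarity measure are all illustrative assumptions.

```python
from difflib import SequenceMatcher

# Hypothetical expert formula pool, tokenized (operator names are illustrative).
EXPERT_FORMULAS = [
    ["div", "close", "open"],
    ["sub", "high", "low"],
]

def potential(tokens):
    """Potential Phi(s): best subsequence-level similarity between the
    partially generated expression and any expert formula."""
    if not tokens:
        return 0.0
    return max(
        SequenceMatcher(None, tokens, expert).ratio()
        for expert in EXPERT_FORMULAS
    )

def shaped_reward(prev_tokens, next_tokens, env_reward, gamma=1.0):
    """Potential-based shaping: add F = gamma * Phi(s') - Phi(s) to the
    environment reward, which preserves the optimal policy and, with
    gamma = 1, telescopes over a trajectory to Phi(final) - Phi(initial)."""
    return env_reward + gamma * potential(next_tokens) - potential(prev_tokens)
```

Because the potential is computed on partial expressions, the agent receives informative feedback at every token-generation step instead of only at the end of the trajectory.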

πŸ“ Abstract
Reinforcement learning (RL) has successfully automated the complex process of mining formulaic alpha factors to create interpretable and profitable investment strategies. However, existing methods are hampered by the sparse rewards of the underlying Markov Decision Process. This inefficiency limits the exploration of the vast symbolic search space and destabilizes the training process. To address this, Trajectory-level Reward Shaping (TLRS), a novel reward shaping method, is proposed. TLRS provides dense, intermediate rewards by measuring the subsequence-level similarity between partially generated expressions and a set of expert-designed formulas. Furthermore, a reward centering mechanism is introduced to reduce training variance. Extensive experiments on six major Chinese and U.S. stock indices show that TLRS significantly improves the predictive power of mined factors, boosting the Rank Information Coefficient by 9.29% over existing potential-based shaping algorithms. Notably, TLRS achieves a major leap in computational efficiency by reducing its time complexity with respect to the feature dimension from linear to constant, a significant improvement over distance-based baselines.
Problem

Research questions and friction points this paper is trying to address.

Addresses sparse rewards in RL for alpha factor mining
Improves exploration in symbolic search space for investment strategies
Enhances computational efficiency and predictive power of mined factors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trajectory-level Reward Shaping for dense rewards
Reward centering mechanism reduces training variance
Constant time complexity in feature dimension
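The reward centering mechanism listed above can be sketched with a running-mean estimator; this is an assumed standard form (subtracting an incrementally updated average reward), and the paper's exact formulation may differ.

```python
class RewardCenterer:
    """Variance-reduction sketch: subtract a running mean of observed
    rewards so centered rewards fluctuate around zero."""

    def __init__(self):
        self.mean = 0.0   # running average of all rewards seen so far
        self.count = 0    # number of rewards observed

    def center(self, reward):
        # Incremental mean update: mean += (x - mean) / n
        self.count += 1
        self.mean += (reward - self.mean) / self.count
        return reward - self.mean
```

Centering shifts the reward signal without changing the ranking of actions, which is why it can stabilize policy-gradient training without altering what the agent ultimately optimizes.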
Junjie Zhao
Master's student, Peking University (CVML)
Chengxi Zhang
Harvard University, Cambridge, MA 02138, USA
Chenkai Wang
Southern University of Science and Technology, Shenzhen 518055, China
Peng Yang
Southern University of Science and Technology, Shenzhen 518055, China