🤖 AI Summary
To address delayed risk factor identification, challenges in modeling nonlinear volatility dynamics, and poor cross-market generalizability in high-frequency trading (HFT), this paper proposes an end-to-end joint optimization framework. The method introduces hierarchical proximal policy optimization (Hierarchical PPO) coupled with a Transferred Option (TO) mechanism, combining operators such as sin, +, **, and / to automatically discover intraday risk factors from raw HFT data. A Pearson correlation–driven reward function enables co-training of factor generation and quality assessment. The framework supports cross-market strategy transfer using historical HFT data. Experiments on high-frequency index data from China, India, and the U.S. demonstrate that the approach achieves a 25% improvement in excess returns over baseline models, significantly enhances volatility prediction accuracy, and yields risk factors with superior interpretability and robustness.
📝 Abstract
Traditional manually designed risk factors, such as beta, size/value, and momentum, often lag behind market dynamics when measuring and predicting volatility in stock returns. Furthermore, statistical models such as principal component analysis (PCA) and factor analysis frequently fail to capture hidden nonlinear relationships. While genetic programming (GP) has advanced in identifying nonlinear factors automatically, it often lacks an internal mechanism for evaluating factor quality, and the resulting formulas are typically overly complex. To address these challenges, we propose a Hierarchical Proximal Policy Optimization (HPPO) framework for automated factor generation and evaluation. The framework leverages two PPO models: a high-level policy and a low-level policy. The high-level policy learns and assigns weights to stock features, while the low-level policy identifies latent nonlinear relationships by combining operators such as $\sin()$, $+$, $**$, and $/$. The Pearson correlation coefficient between the generated risk factors and realized return volatility serves as the reward signal, quantifying factor efficacy. Additionally, we incorporate transfer learning into HPPO by pre-training the high-level policy on large-scale historical data from the same high-frequency trading (HFT) market. The policy is then fine-tuned with the latest data to account for newly emerging features and distribution shifts. This Transferred Option (TO) enables the high-level policy to leverage previously learned feature correlations across different market environments, resulting in faster convergence and higher-quality factor generation. Experimental results demonstrate that, compared to baselines, the HPPO-TO algorithm achieves a 25% excess return in HFT markets across China (CSI 300 Index/CSI 800 Index), India (Nifty 100 Index), and the United States (S&P 500 Index).
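The reward described above can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: a candidate factor is assembled from the stated operator set ($\sin$, $+$, $**$, $/$) over hypothetical per-bar stock features `x1..x3` (names and the particular formula are assumptions for illustration), and the reward is the Pearson correlation between the factor values and a realized-volatility series.

```python
import numpy as np

def pearson_reward(factor_values: np.ndarray, realized_vol: np.ndarray) -> float:
    """Reward = Pearson correlation between a candidate factor and realized volatility.

    A larger absolute value indicates a more informative risk factor; zero
    is returned for degenerate (constant) inputs.
    """
    f = factor_values - factor_values.mean()
    v = realized_vol - realized_vol.mean()
    denom = np.sqrt((f ** 2).sum() * (v ** 2).sum())
    return float((f * v).sum() / denom) if denom > 0 else 0.0

# Hypothetical candidate factor built from the operator set {sin, +, **, /},
# e.g. factor = sin(x1) + x2 ** 2 / x3 (the exact expression is illustrative).
rng = np.random.default_rng(0)
x1, x2, x3 = rng.normal(size=(3, 500))
factor = np.sin(x1) + x2 ** 2 / (np.abs(x3) + 1e-6)  # small epsilon avoids division by zero

# Synthetic realized-volatility proxy, stand-in for the HFT target series.
realized_vol = np.abs(rng.normal(size=500))

reward = pearson_reward(factor, realized_vol)
print(f"reward = {reward:.4f}")
```

In the framework, this scalar would be fed back to both PPO levels, so factor construction (operator choices) and factor evaluation (correlation with volatility) are trained jointly.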