🤖 AI Summary
Traditional rule-based agent-based modeling (ABM) for high-frequency trading (HFT) suffers from a vast, hard-to-calibrate parameter space, while multi-agent reinforcement learning (MARL) offers greater realism and fewer parameters but incurs prohibitive computational costs, hindering its application to fine-grained order-book data. Method: We propose the first GPU-accelerated, open-source MARL framework tailored for HFT, uniquely integrating JAX with the JAX-LOB limit-order-book simulator to enable end-to-end training. It supports large-scale heterogeneous agent simulation, flexible single- or multi-agent configurations, and distributed PPO training. Contribution/Results: Evaluated on one year of real-world market-by-order (MBO) data comprising 400 million events, our framework achieves up to a 240x speedup in training throughput. In two-agent settings, learned execution and market-making agents significantly outperform baseline strategies, establishing an efficient, scalable paradigm for both complex HFT strategy research and empirical validation.
📄 Abstract
Agent-based modelling (ABM) approaches for high-frequency financial markets are difficult to calibrate and validate, partly due to the large parameter space created by defining fixed agent policies. Multi-agent reinforcement learning (MARL) enables more realistic agent behaviour and reduces the number of free parameters, but its heavy computational cost has so far limited research efforts. To address this, we introduce JaxMARL-HFT (JAX-based Multi-Agent Reinforcement Learning for High-Frequency Trading), the first GPU-accelerated open-source multi-agent reinforcement learning environment for high-frequency trading (HFT) on market-by-order (MBO) data. Extending the JaxMARL framework and building on the JAX-LOB implementation, JaxMARL-HFT is designed to handle a heterogeneous set of agents, enabling diverse observation/action spaces and reward functions. It is designed flexibly, so it can also be used for single-agent RL, or extended to act as an ABM with fixed-policy agents. Leveraging JAX enables up to a 240x reduction in end-to-end training time, compared with state-of-the-art reference implementations on the same hardware. This significant speed-up makes it feasible to exploit the large, granular datasets available in high-frequency trading, and to perform the extensive hyperparameter sweeps required for robust and efficient MARL research in trading. We demonstrate the use of JaxMARL-HFT with independent Proximal Policy Optimization (IPPO) in a two-player environment, with an order-execution agent and a market-making agent, using one year of LOB data (400 million orders), and show that these agents learn to outperform standard benchmarks. The code for the JaxMARL-HFT framework is available on GitHub.
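The speedups described above come from JAX's ability to compile and vectorize environment dynamics on accelerators. As a rough illustration (not the actual JaxMARL-HFT API; the environment here is a toy stand-in with a hypothetical `step` function), the core pattern is to write a single-environment transition and lift it over thousands of parallel environments with `jax.vmap`, then JIT-compile the batched function:

```python
# Hedged sketch: vectorized environment stepping in the JAX style used by
# frameworks like JaxMARL-HFT. The `step` function below is an invented toy
# transition, not the real simulator; only jax.vmap / jax.jit are real APIs.
import jax
import jax.numpy as jnp

def step(state, action):
    """Toy single-environment transition: state is a scalar inventory,
    action is a signed order size; reward penalizes inventory risk."""
    new_state = state + action
    reward = -jnp.abs(new_state)  # penalize holding inventory
    return new_state, reward

# Vectorize over a batch of environments, then JIT so the whole batch
# executes as one fused kernel on GPU/TPU.
batched_step = jax.jit(jax.vmap(step))

states = jnp.zeros(1024)   # 1024 parallel environments
actions = jnp.ones(1024)   # each agent buys one unit
new_states, rewards = batched_step(states, actions)
print(new_states.shape)    # (1024,)
```

Because the batched step is a pure function, it composes directly with `jax.lax.scan` for rollouts and with gradient-based PPO updates, which is what makes end-to-end training on hundreds of millions of orders tractable.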