🤖 AI Summary
This paper identifies a "profit mirage" in large language model (LLM)-based financial agents: high backtested returns stem from implicit information leakage within the model's knowledge window, causing severe out-of-sample performance degradation. To address this, the authors propose FactFin, a causality-aware decision-making framework integrating four components: strategy code generation, retrieval-augmented generation, Monte Carlo tree search, and counterfactual simulation. By applying counterfactual perturbations, FactFin suppresses outcome memorization and steers the model toward learning causally grounded mechanisms. The paper also introduces FinLake-Bench, a leakage-robust benchmark for financial LLMs that rigorously evaluates generalization without reliance on prior financial knowledge. Experiments demonstrate that FactFin significantly outperforms mainstream baselines in out-of-sample risk-adjusted returns (e.g., Sharpe ratio), validating the critical role of causal modeling in the robustness of LLM-driven financial decision-making.
📝 Abstract
LLM-based financial agents have attracted widespread excitement for their ability to trade like human experts. However, most systems exhibit a "profit mirage": dazzling back-tested returns evaporate once the model's knowledge window ends, owing to inherent information leakage in LLMs. In this paper, we systematically quantify this leakage issue across four dimensions and release FinLake-Bench, a leakage-robust evaluation benchmark. To mitigate this issue, we further introduce FactFin, a framework that applies counterfactual perturbations to compel LLM-based agents to learn causal drivers instead of memorized outcomes. FactFin integrates four core components: a Strategy Code Generator, Retrieval-Augmented Generation, Monte Carlo Tree Search, and a Counterfactual Simulator. Extensive experiments show that our method surpasses all baselines in out-of-sample generalization, delivering superior risk-adjusted performance.