🤖 AI Summary
This paper addresses the insufficient evaluation of large language models (LLMs) in public mutual fund investment and the prevalent issue of information leakage in existing backtesting methodologies. To this end, we propose the first dynamic forward-evaluation paradigm tailored to asset management scenarios. Methodologically, we design a forward-testing platform that simulates live trading environments. It integrates a multi-agent collaborative architecture in which LLMs serve concurrently as financial analysts and portfolio managers, alongside an LLM-powered financial reasoning module, a real-time market data simulation engine, and an interactive visualization interface; all evaluations are conducted strictly on post-publication market data via rolling windows. Our core contribution is a novel forward-testing framework that rigorously prevents historical information leakage and enables strategy comparison across market cycles. This significantly enhances the authenticity, dynamic adaptability, and practical credibility of LLM investment capability assessment, establishing a verifiable evaluation infrastructure for LLM deployment in asset management.
📝 Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities across various domains, but their effectiveness in financial decision-making, particularly in fund investment, remains inadequately evaluated. Current benchmarks primarily assess LLMs' understanding of financial documents rather than their ability to manage assets or analyze trading opportunities in dynamic market conditions. A critical limitation of existing evaluation methodologies is the backtesting approach, which suffers from information leakage when LLMs are evaluated on historical data they may have encountered during pretraining. This paper introduces DeepFund, a comprehensive platform for evaluating LLM-based trading strategies in a simulated live environment. Our approach implements a multi-agent framework in which LLMs serve as both analysts and managers, creating a realistic simulation of investment decision-making. The platform employs a forward-testing methodology that mitigates information leakage by evaluating models only on market data released after their training cutoff dates. We provide a web interface that visualizes model performance across different market conditions and investment parameters, enabling detailed comparative analysis. Through DeepFund, we aim to provide a more accurate and fair assessment of LLMs' capabilities in fund investment, offering insights into their potential real-world applications in financial markets.
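The forward-testing idea above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the price series, the `forward_windows` helper, and the cutoff date are all hypothetical. It shows the core constraint, namely that every rolling evaluation window contains only market data dated strictly after the model's training cutoff, so no pretraining-era data can leak into the evaluation.

```python
from datetime import date, timedelta

# Hypothetical daily closing prices: {date: price}. In a real setup this
# would come from a live market data feed, not a synthetic series.
prices = {date(2024, 1, 1) + timedelta(days=i): 100 + i * 0.5 for i in range(120)}

def forward_windows(prices, cutoff, window_days=30):
    """Yield non-overlapping rolling evaluation windows containing only
    data released strictly after the model's training cutoff date."""
    dates = sorted(d for d in prices if d > cutoff)
    for start in range(0, len(dates) - window_days + 1, window_days):
        window = dates[start:start + window_days]
        yield [(d, prices[d]) for d in window]

cutoff = date(2024, 2, 1)  # assumed training cutoff of the model under test
windows = list(forward_windows(prices, cutoff))
# Leakage check: no window may contain pre-cutoff data.
assert all(d > cutoff for w in windows for d, _ in w)
```

Each window would then be fed to the LLM agents for analysis and portfolio decisions, with performance aggregated across windows to compare strategies over different market conditions.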