🤖 AI Summary
General-purpose large language models and agents often lack the specialized financial knowledge required for complex financial reasoning tasks. To address this gap, this work proposes the first comprehensive evaluation framework for financial AI systems, spanning three phases: exploration, readiness, and governance. The framework underpins an open and standardized benchmarking platform for FinLLMs and FinAgents, integrating an evaluation pipeline, governance framework, leaderboard, AgentOps infrastructure, and documentation website. Developed in collaboration with the Linux Foundation, PyTorch Foundation, Hugging Face, and Red Hat, the platform enables researchers and practitioners to efficiently evaluate financial language models and agents, significantly enhancing the robustness and reliability of financial AI systems.
📝 Abstract
Over the past three years, the financial services industry has witnessed Large Language Models (LLMs) and agents transitioning from the exploration stage to the readiness and governance stages. Financial large language models (FinLLMs), such as the open-source FinGPT and the proprietary BloombergGPT, have great potential in financial applications, including retrieving real-time data, tutoring, analyzing social media sentiment, analyzing SEC filings, and agentic trading. However, general-purpose LLMs and agents lack financial expertise and often struggle to handle complex financial reasoning. This paper presents an evaluation and benchmarking suite that covers the lifecycle of FinLLMs and FinAgents. This suite, led by SecureFinAI Lab, includes an evaluation pipeline and a governance framework developed in collaboration with the Linux Foundation and PyTorch Foundation, a FinLLM Leaderboard with Hugging Face, an AgentOps framework with Red Hat, and a documentation website with the Rensselaer Center for Open Source. Our collaborative development evolves through three stages: FinLLM Exploration (2023), FinLLM Readiness (2024), and FinAI Governance (2025). The proposed suite serves as an open platform that enables researchers and practitioners to perform both quantitative and qualitative analysis of different FinLLMs and FinAgents, fostering a more robust and reliable FinAI ecosystem.