🤖 AI Summary
In financial applications, severe output drift in large language models (LLMs) undermines auditability and regulatory compliance. Method: We propose the first systematic verification and mitigation framework for financial determinism, featuring a three-tier model classification, a cross-provider output-consistency verification mechanism, and an audit-ready attestation system. Key techniques include greedy decoding (temperature = 0.0), fixed random seeds, SEC 10-K structure-aware retrieval ordering, and finance-calibrated invariant checks for RAG, JSON, and SQL outputs. Contribution/Results: Evaluated across five model families and 480 runs, our approach achieves 100% output consistency on smaller models, whose structured-task stability significantly outperforms that of larger models, challenging the "larger-is-better" paradigm. The framework aligns with international regulatory standards, including those of the FSB, BIS, and CFTC, providing a verifiable, auditable technical pathway for trustworthy financial AI deployment.
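As a rough illustration of the harness's core loop (not the authors' actual code), the sketch below repeats a single prompt under greedy decoding with a fixed seed against an assumed OpenAI-compatible endpoint (such as a local Ollama server) and reports the exact-match consistency rate across runs. The endpoint URL, model tag, and prompt are placeholders.

```python
"""Minimal determinism check: repeat one prompt N times with greedy decoding
(temperature=0.0) and a fixed seed, then measure exact-match consistency.
Endpoint, model name, and prompt are illustrative placeholders."""
import hashlib
from collections import Counter

import requests

ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumed OpenAI-compatible server
MODEL = "qwen2.5:7b"   # hypothetical local model tag
PROMPT = "Summarize Item 7 (MD&A) of the attached 10-K excerpt in three bullet points."
N_RUNS = 16            # matches the paper's n=16 per condition

def generate_once() -> str:
    """One generation with the deterministic settings described in the summary."""
    resp = requests.post(
        ENDPOINT,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": PROMPT}],
            "temperature": 0.0,  # greedy decoding
            "seed": 42,          # fixed random seed
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def consistency_rate(outputs: list[str]) -> float:
    """Fraction of runs whose output exactly matches the most common output."""
    digests = [hashlib.sha256(o.encode()).hexdigest() for o in outputs]
    _, modal_count = Counter(digests).most_common(1)[0]
    return modal_count / len(digests)

if __name__ == "__main__":
    runs = [generate_once() for _ in range(N_RUNS)]
    print(f"exact-match consistency: {consistency_rate(runs):.1%} over {N_RUNS} runs")
```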
📝 Abstract
Financial institutions deploy Large Language Models (LLMs) for reconciliations, regulatory reporting, and client communications, but nondeterministic outputs (output drift) undermine auditability and trust. We quantify drift across five model architectures (7B-120B parameters) on regulated financial tasks, revealing a stark inverse relationship between model scale and output consistency: smaller models (Granite-3-8B, Qwen2.5-7B) achieve 100% output consistency at T=0.0, while GPT-OSS-120B exhibits only 12.5% consistency (95% CI: 3.5-36.0%) regardless of configuration (p<0.0001, Fisher's exact test). This finding challenges the conventional assumption that larger models are universally superior for production deployment. Our contributions include: (i) a finance-calibrated deterministic test harness combining greedy decoding (T=0.0), fixed seeds, and SEC 10-K structure-aware retrieval ordering; (ii) task-specific invariant checking for RAG, JSON, and SQL outputs using finance-calibrated materiality thresholds (±5%) and SEC citation validation; (iii) a three-tier model classification system enabling risk-appropriate deployment decisions; and (iv) an audit-ready attestation system with dual-provider validation. We evaluate five models (Qwen2.5-7B via Ollama, Granite-3-8B via IBM watsonx.ai, Llama-3.3-70B, Mistral-Medium-2505, and GPT-OSS-120B) across three regulated financial tasks. Across 480 runs (n=16 per condition), structured tasks (SQL) remain stable even at T=0.2, while RAG tasks show drift rates of 25-75%, revealing task-dependent sensitivity. Cross-provider validation confirms that deterministic behavior transfers between local and cloud deployments. We map our framework to Financial Stability Board (FSB), Bank for International Settlements (BIS), and Commodity Futures Trading Commission (CFTC) requirements, demonstrating practical pathways for compliance-ready AI deployments.
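The abstract does not spell out how the ±5% materiality threshold is applied; the sketch below shows one plausible reading, assuming numeric figures are extracted from two runs of the same task and compared position by position, with relative differences above 5% flagged as material drift. The extraction regex, the pairing-by-position rule, and the example answers are illustrative assumptions, not the paper's implementation.

```python
"""Illustrative invariant check: flag numeric drift between two runs of the same
financial task when values differ by more than a ±5% materiality threshold."""
import re

NUMBER_RE = re.compile(r"-?\$?\d[\d,]*(?:\.\d+)?")  # e.g. "$1,250", "4.2", "-3.2"

def extract_numbers(text: str) -> list[float]:
    """Pull numeric figures out of a model answer, ignoring $ and thousands separators."""
    return [float(m.replace("$", "").replace(",", "")) for m in NUMBER_RE.findall(text)]

def material_drift(run_a: str, run_b: str, threshold: float = 0.05) -> list[tuple[float, float]]:
    """Return pairs of figures (same position in both runs) whose relative
    difference exceeds the materiality threshold (default ±5%)."""
    nums_a, nums_b = extract_numbers(run_a), extract_numbers(run_b)
    if len(nums_a) != len(nums_b):
        # A different number of figures is itself a structural drift signal.
        raise ValueError(f"figure count mismatch: {len(nums_a)} vs {len(nums_b)}")
    violations = []
    for a, b in zip(nums_a, nums_b):
        baseline = max(abs(a), abs(b), 1e-9)  # avoid division by zero
        if abs(a - b) / baseline > threshold:
            violations.append((a, b))
    return violations

if __name__ == "__main__":
    r1 = "Net revenue was $1,250 million, up 4.2% year over year."
    r2 = "Net revenue was $1,150 million, up 4.1% year over year."
    print(material_drift(r1, r2))  # [(1250.0, 1150.0)]: ~8% gap exceeds the ±5% threshold
```

For SQL and JSON outputs, the same idea would typically be paired with exact structural comparison (normalized query text or canonicalized JSON), since those tasks are reported as stable even at T=0.2.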