Evaluating AI for Finance: Is AI Credible at Assessing Investment Risk?

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates the trustworthiness of nine leading closed- and open-source large language models (LLMs) in investment risk preference assessment, uncovering significant geographic and gender biases. Method: using a structured dataset of 1,720 user profiles spanning 10 countries and both genders, each annotated with 16 risk-related attributes, we conduct multi-model comparative experiments, hierarchical sensitivity quantification, and risk-score distribution analysis. Contribution/Results: we identify, for the first time, opposing group-sensitivity patterns between GPT-4o and LLaMA 3.1; no model maintains consistent performance across geographic and demographic dimensions, and only GPT-4o and LLaMA 3.1 approximate human-expected risk scoring in the low- and medium-risk ranges. We propose a standardized, regulatory-grade framework for evaluating AI trustworthiness in financial applications, offering both a methodological foundation and empirical evidence for mitigating deployment risks, including bias, opacity, and unreliability, in real-world investment advisory systems.

📝 Abstract
We evaluate the credibility of leading AI models in assessing investment risk appetite. Our analysis spans proprietary (GPT-4, Claude 3.7, Gemini 1.5) and open-weight models (LLaMA 3.1/3.3, DeepSeek-V3, Mistral-small), using 1,720 user profiles constructed with 16 risk-relevant features across 10 countries and both genders. We observe significant variance across models in score distributions and demographic sensitivity. For example, GPT-4o assigns higher risk scores to Nigerian and Indonesian profiles, while LLaMA and DeepSeek show opposite gender tendencies in risk classification. While some models (e.g., GPT-4o, LLaMA 3.1) align closely with expected scores in low- and mid-risk ranges, none maintain consistent performance across regions and demographics. Our findings highlight the need for rigorous, standardized evaluations of AI systems in regulated financial contexts to prevent bias, opacity, and inconsistency in real-world deployment.
Problem

Research questions and friction points this paper is trying to address.

Assessing AI credibility in evaluating investment risk
Analyzing variance in risk scores across AI models
Identifying demographic biases in AI risk classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates AI models for investment risk assessment
Analyzes 1,720 profiles with 16 risk features
Highlights demographic biases in AI risk scoring
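The paper's risk-score distribution and demographic-sensitivity analyses can be sketched in miniature as follows. The scores, country/gender groups, and the simple "largest mean gap" sensitivity metric below are illustrative assumptions for exposition, not the authors' actual data or pipeline.

```python
from statistics import mean

# Hypothetical risk scores (0-100) that an LLM assigned to user
# profiles, keyed by (country, gender) group. Illustrative data only.
scores_by_group = {
    ("Nigeria", "F"): [72, 68, 75, 70],
    ("Nigeria", "M"): [74, 71, 69, 73],
    ("Germany", "F"): [51, 48, 55, 50],
    ("Germany", "M"): [53, 49, 52, 54],
}

def group_means(scores):
    """Mean risk score per demographic group."""
    return {g: mean(v) for g, v in scores.items()}

def sensitivity_gap(scores, axis):
    """Largest gap between mean scores along one demographic axis
    (axis=0 pools by country, axis=1 pools by gender)."""
    by_level = {}
    for group, vals in scores.items():
        by_level.setdefault(group[axis], []).extend(vals)
    means = [mean(v) for v in by_level.values()]
    return max(means) - min(means)

print(group_means(scores_by_group))
print(f"country sensitivity gap: {sensitivity_gap(scores_by_group, 0):.1f}")
print(f"gender sensitivity gap:  {sensitivity_gap(scores_by_group, 1):.1f}")
```

A large gap along one axis (here, country) but not another is the kind of asymmetric group sensitivity the study reports, e.g. GPT-4o's elevated scores for Nigerian and Indonesian profiles.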
👥 Authors
Divij Chawla, Walled AI Labs
Ashita Bhutada, Walled AI Labs
Do Duc Anh, Walled AI Labs
Abhinav Raghunathan, Walled AI Labs
SP Vinod, Walled AI Labs
Cathy Guo, Walled AI Labs
Dar Win Liew, Walled AI Labs
Prannaya Gupta, Researcher, AETHER by RAiD (AI safety, human-AI interaction, cognitive modelling, computer vision)
Rishabh Bhardwaj, Singapore University of Technology and Design (natural language processing, machine learning)
Rajat Bhardwaj, Walled AI Labs
Soujanya Poria, Walled AI Labs