🤖 AI Summary
This study systematically evaluates the trustworthiness of nine leading closed- and open-source large language models (LLMs) in investment risk preference assessment, uncovering significant geographic and gender biases. Method: Leveraging a structured dataset of 1,720 user profiles—spanning 10 countries and both genders, each annotated with 16 risk-related attributes—we conduct multi-model comparative experiments, hierarchical sensitivity quantification, and risk-score distribution analysis. Contribution/Results: We identify, for the first time, opposing group-sensitivity patterns between GPT-4o and LLaMA 3.1; no model maintains consistent performance across geographic and demographic dimensions. Only GPT-4o and LLaMA 3.1 approximate human-expected risk scores in the low- and medium-risk ranges. We propose a standardized, regulatory-grade framework for evaluating AI trustworthiness in financial applications, offering both a methodological foundation and empirical evidence to mitigate deployment risks—including bias, opacity, and unreliability—in real-world investment advisory systems.
📝 Abstract
We evaluate the credibility of leading AI models in assessing investment risk appetite. Our analysis spans proprietary models (GPT-4o, Claude 3.7, Gemini 1.5) and open-weight models (LLaMA 3.1/3.3, DeepSeek-V3, Mistral-small), using 1,720 user profiles constructed from 16 risk-relevant features across 10 countries and both genders. We observe significant variance across models in score distributions and demographic sensitivity. For example, GPT-4o assigns higher risk scores to Nigerian and Indonesian profiles, while LLaMA and DeepSeek show opposite gender tendencies in risk classification. While some models (e.g., GPT-4o, LLaMA 3.1) align closely with expected scores in low- and mid-risk ranges, none maintain consistent performance across regions and demographics. Our findings highlight the need for rigorous, standardized evaluations of AI systems in regulated financial contexts to prevent bias, opacity, and inconsistency in real-world deployment.
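The evaluation design described above (a cross-product grid of user profiles, per-model risk scoring, and group-level sensitivity gaps along country and gender) can be sketched in miniature. Note this is a hypothetical illustration: the toy scorer, the attribute names, and the three-attribute grid stand in for an actual LLM and the paper's 16-attribute schema, and the deliberate geographic skew in the scorer exists only so the sensitivity metric has something to detect:

```python
from itertools import product
from statistics import mean

def toy_model_score(profile):
    """Hypothetical stand-in for an LLM risk scorer: profile -> score.
    Includes a deliberate synthetic geographic skew for illustration."""
    base = 50.0
    base += {"low": -20, "medium": 0, "high": 20}[profile["income"]]
    if profile["country"] in {"Nigeria", "Indonesia"}:
        base += 5
    return base

countries = ["US", "UK", "Germany", "Japan", "Brazil",
             "India", "Nigeria", "Indonesia", "China", "France"]
genders = ["female", "male"]
incomes = ["low", "medium", "high"]

# Cross-product profile grid: a small analogue of the 1,720-profile
# design over 10 countries, 2 genders, and 16 risk-related attributes.
profiles = [{"country": c, "gender": g, "income": i}
            for c, g, i in product(countries, genders, incomes)]

def group_sensitivity(profiles, score_fn, attr):
    """Largest gap between group-mean risk scores along one attribute,
    with the rest of the grid held fixed by the cross-product design."""
    groups = {}
    for p in profiles:
        groups.setdefault(p[attr], []).append(score_fn(p))
    means = {k: mean(v) for k, v in groups.items()}
    return max(means.values()) - min(means.values()), means

geo_gap, geo_means = group_sensitivity(profiles, toy_model_score, "country")
gender_gap, _ = group_sensitivity(profiles, toy_model_score, "gender")
print(f"geographic gap: {geo_gap:.1f}, gender gap: {gender_gap:.1f}")
```

Because the grid is a full cross-product, a nonzero gap along one attribute cannot be explained by imbalance in the others; the same comparison run per model is what surfaces the opposing sensitivity patterns reported for GPT-4o and LLaMA 3.1.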