🤖 AI Summary
This study investigates how ethical alignment (specifically harmlessness, helpfulness, and honesty) affects the risk preferences of large language models (LLMs) acting as AI decision-makers. Method: Evaluating 30 mainstream LLMs, we combine behavioral economics experiments, quantitative risk preference assessment, and investment prediction benchmarking. Contribution/Results: We demonstrate, for the first time, that ethical alignment significantly increases LLM risk aversion, with a nonlinear relationship between alignment strength and risk avoidance; excessive alignment induces severe underinvestment, raising average prediction error by 42%. The findings expose a fundamental tension between ethical alignment and economically valuable risk-taking, motivating a novel "alignment–domain adaptability" trade-off paradigm, and empirically locate the critical threshold at which alignment begins to degrade task performance.
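The summary does not reproduce the paper's elicitation instrument, but a standard way to quantify risk preference in behavioral economics is a Holt–Laury multiple price list combined with a CRRA utility fit. The sketch below is a minimal, hypothetical version of that approach: the lottery payoffs, the grid search, and the `implied_crra` helper are illustrative assumptions, not the study's actual protocol.

```python
import numpy as np

# Minimal sketch of one way to quantify an LLM's risk preference: present a
# Holt-Laury style multiple price list (ten lottery pairs), record the row
# where the model first switches from the safe to the risky option, and
# back out the CRRA coefficient consistent with that switch point.
# All payoffs and probabilities below are illustrative, not the paper's design.
P_HIGH = np.linspace(0.1, 1.0, 10)   # probability of each lottery's high payoff
SAFE = (2.00, 1.60)                  # option A: (high, low) payoffs
RISKY = (3.85, 0.10)                 # option B: (high, low) payoffs

def crra_utility(x, r):
    """CRRA utility u(x) = x^(1-r) / (1-r), with log utility at r = 1."""
    return np.log(x) if np.isclose(r, 1.0) else x ** (1.0 - r) / (1.0 - r)

def expected_utility(payoffs, p_high, r):
    high, low = payoffs
    return p_high * crra_utility(high, r) + (1.0 - p_high) * crra_utility(low, r)

def implied_crra(observed_switch_row, grid=np.linspace(-2.0, 3.0, 501)):
    """Return the r on the grid whose predicted switch row (first row where
    the risky lottery's expected utility exceeds the safe one's) is closest
    to the switch row observed in the LLM's choices."""
    best_r, best_gap = None, np.inf
    for r in grid:
        prefers_risky = expected_utility(RISKY, P_HIGH, r) > expected_utility(SAFE, P_HIGH, r)
        predicted = int(np.argmax(prefers_risky)) if prefers_risky.any() else len(P_HIGH)
        gap = abs(predicted - observed_switch_row)
        if gap < best_gap:
            best_r, best_gap = r, gap
    return best_r

# Example: a model that only switches to the risky lottery at row 7
# (0-indexed) implies a positive r, i.e. risk aversion.
print(f"implied CRRA coefficient: {implied_crra(7):.2f}")
```

Under this convention, later switch rows map to larger CRRA coefficients, so alignment-induced risk aversion would show up as a systematic rightward shift in switch points across the 30 models.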
📝 Abstract
This study examines the risk preferences of Large Language Models (LLMs) and how aligning them with human ethical standards affects their economic decision-making. Our analysis of 30 LLMs reveals a range of inherent risk profiles, from risk-averse to risk-seeking. We find that aligning LLMs with human values, focusing on harmlessness, helpfulness, and honesty, shifts them towards risk aversion. While some alignment improves investment forecast accuracy, excessive alignment leads to overly cautious predictions, potentially resulting in severe underinvestment. Our findings highlight the need for a nuanced approach that balances ethical alignment with the domain-specific demands of economic decision-making when deploying LLMs in finance.
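To make the underinvestment mechanism concrete: if an over-aligned model systematically shrinks its investment forecasts toward zero, even otherwise well-calibrated predictions accumulate large errors from the bias alone. The simulation below is illustrative only; the 0.6 shrinkage factor, the simulated amounts, and the MAPE metric are assumptions, not the paper's data or evaluation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ground-truth investment amounts and a roughly calibrated forecast
# with ~10% multiplicative noise. Purely illustrative; not the paper's data.
truth = rng.uniform(50.0, 150.0, size=1_000)
calibrated = truth * rng.normal(1.0, 0.10, size=1_000)

# A hypothetical over-aligned model that is overly cautious: it shrinks every
# forecast toward zero, modeling systematic underinvestment.
cautious = 0.6 * calibrated

def mape(pred, actual):
    """Mean absolute percentage error."""
    return float(np.mean(np.abs(pred - actual) / actual))

print(f"calibrated forecast MAPE: {mape(calibrated, truth):.1%}")
print(f"cautious forecast MAPE:   {mape(cautious, truth):.1%}")
```

The cautious model's error is dominated by its downward bias rather than by noise, which is the qualitative pattern the abstract describes for excessively aligned models.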