🤖 AI Summary
This study identifies significant knowledge conflicts and cognitive biases in large language models (LLMs) used for investment analysis: their pretraining-embedded preferences misalign with real-time market dynamics and institutional objectives, undermining the reliability of their recommendations. We propose the first quantifiable experimental framework to elicit and track LLMs' implicit investment preferences, combining balanced and imbalanced hypothesis scenarios, preference extraction, and dynamic preference monitoring to systematically assess bias and rigidity across industry, market-capitalization, and momentum factors. We present the first quantitative measurement of confirmation bias in LLMs, revealing a pervasive large-cap bias and a contrarian-strategy inclination across mainstream models, both of which intensify upon exposure to counter-evidence. The results show that LLM investment judgments are governed predominantly by internalized knowledge and adapt poorly to new information, which severely limits their credibility and practical utility in financial decision-making.
📝 Abstract
In finance, Large Language Models (LLMs) face frequent knowledge conflicts due to discrepancies between pre-trained parametric knowledge and real-time market data. These conflicts become particularly problematic when LLMs are deployed in real-world investment services, where misalignment between a model's embedded preferences and those of the financial institution can lead to unreliable recommendations. Yet little research has examined what investment views LLMs actually hold. We propose an experimental framework to investigate such conflicts, offering the first quantitative analysis of confirmation bias in LLM-based investment analysis. Using hypothetical scenarios with balanced and imbalanced arguments, we extract models' latent preferences and measure their persistence. Focusing on sector, size, and momentum, our analysis reveals distinct, model-specific tendencies. In particular, we observe a consistent preference for large-cap stocks and contrarian strategies across most models. These preferences often harden into confirmation bias, with models clinging to initial judgments despite counter-evidence.
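The elicit-then-monitor protocol described above can be sketched in a few lines. Everything here is illustrative: `query_model` is a hypothetical stub standing in for a real LLM call, and the buy/sell scenario encoding is an assumption for the sketch, not the paper's actual experimental setup. The stub models maximal rigidity (it keeps its prior regardless of the argument mix), so the measured persistence rate comes out at 1.0.

```python
from collections import Counter

# Hypothetical scenarios: a "balanced" mix of bull and bear arguments to elicit
# the latent preference, and an "imbalanced" mix skewed against it to test
# whether the model updates (assumed encoding, not the paper's actual prompts).
SCENARIOS = {
    "balanced":   {"bull_args": 3, "bear_args": 3},
    "imbalanced": {"bull_args": 1, "bear_args": 5},
}

def query_model(scenario):
    """Stub for an LLM call returning 'buy' or 'sell'.

    Models an entrenched bullish prior that ignores the argument mix,
    i.e. maximal rigidity; a real experiment would call an actual model here.
    """
    return "buy"

def elicit_preference(query_fn, scenario, n_trials=10):
    # Latent preference = majority answer over repeated trials on one scenario.
    votes = Counter(query_fn(scenario) for _ in range(n_trials))
    return votes.most_common(1)[0][0]

def confirmation_bias_rate(query_fn, n_trials=10):
    # Step 1: extract the latent preference from the balanced scenario.
    pref = elicit_preference(query_fn, SCENARIOS["balanced"], n_trials)
    # Step 2: persistence = fraction of imbalanced-scenario answers that keep
    # the preference despite the arguments favoring the opposite view.
    kept = sum(query_fn(SCENARIOS["imbalanced"]) == pref for _ in range(n_trials))
    return pref, kept / n_trials

pref, rate = confirmation_bias_rate(query_model)
print(pref, rate)  # → buy 1.0
```

A rate near 0.5 would indicate the model follows the evidence; a rate near 1.0, as with this deliberately rigid stub, is the confirmation-bias signature the abstract describes.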