Are Generative AI Agents Effective Personalized Financial Advisors?

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the efficacy and boundaries of LLM-driven generative AI agents serving as personalized financial advisors in high-stakes financial domains, addressing three core challenges: proactive user preference elicitation, adaptive alignment to heterogeneous investment goals, and the impact of anthropomorphic interaction on trust formation. Through a controlled user study with 64 participants—employing multi-role conversational agents, a structured preference elicitation framework, and persona-based prompting (e.g., an extraverted persona)—the authors find: (1) AI agents achieve human-level performance in preference acquisition; (2) an extraverted persona significantly enhances affective trust but degrades recommendation quality, revealing a structural decoupling between trust and competence; and (3) user satisfaction can be inversely correlated with recommendation quality, exposing a critical "trust–capability misalignment" risk in high-stakes settings. These findings provide empirical grounding and actionable design principles for the trustworthy deployment of AI agents in professional financial services.
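The persona-based prompting described above can be sketched roughly as follows. This is a minimal, hypothetical illustration of pairing a personality instruction with a structured preference-elicitation script; the persona texts, question list, and function names are assumptions for illustration, not the authors' actual prompts.

```python
# Hypothetical sketch of persona-based prompting for an LLM financial
# advisor. Persona wording and elicitation questions are illustrative
# assumptions, not the study's actual materials.

PERSONAS = {
    "extraverted": (
        "You are an enthusiastic, talkative financial advisor. "
        "Use warm, expressive language and build rapport with the user."
    ),
    "neutral": (
        "You are a measured, professional financial advisor. "
        "Keep responses factual and concise."
    ),
}

ELICITATION_QUESTIONS = [
    "What is your investment horizon?",
    "How much loss could you tolerate in a bad year?",
    "Are there sectors you want to avoid?",
]

def build_system_prompt(persona: str) -> str:
    """Compose a system prompt that combines a persona style with a
    structured preference-elicitation instruction."""
    style = PERSONAS[persona]
    questions = "\n".join(f"- {q}" for q in ELICITATION_QUESTIONS)
    return (
        f"{style}\n"
        "Before recommending any asset, elicit the user's preferences by "
        "working through these questions one at a time:\n"
        f"{questions}"
    )
```

Varying only the persona block while holding the elicitation script fixed is one way such a study could isolate personality effects from advice content.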

📝 Abstract
Large language model-based agents are becoming increasingly popular as a low-cost mechanism to provide personalized, conversational advice, and have demonstrated impressive capabilities in relatively simple scenarios, such as movie recommendations. But how do these agents perform in complex high-stakes domains, where domain expertise is essential and mistakes carry substantial risk? This paper investigates the effectiveness of LLM-advisors in the finance domain, focusing on three distinct challenges: (1) eliciting user preferences when users themselves may be unsure of their needs, (2) providing personalized guidance for diverse investment preferences, and (3) leveraging advisor personality to build relationships and foster trust. Via a lab-based user study with 64 participants, we show that LLM-advisors often match human advisor performance when eliciting preferences, although they can struggle to resolve conflicting user needs. When providing personalized advice, the LLM was able to positively influence user behavior, but demonstrated clear failure modes. Our results show that accurate preference elicitation is key; otherwise, the LLM-advisor has little impact, or can even direct the investor toward unsuitable assets. More worryingly, users appear insensitive to the quality of the advice being given, or worse, satisfaction and advice quality can be inversely related. Indeed, users reported a preference for, and increased satisfaction and emotional trust with, LLMs adopting an extroverted persona, even though those agents provided worse advice.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM-advisors in high-stakes financial advice scenarios
Assessing preference elicitation accuracy for uncertain user needs
Measuring impact of advisor personality on trust and advice quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-advisors match human advisors in preference elicitation
Personalized advice shifts investor behavior when preferences are accurately elicited
Extroverted personas raise satisfaction and emotional trust despite worse advice