Formal verification for robo-advisors: Irrelevant for subjective end-user trust, yet decisive for investment behavior?

📅 2025-09-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how formal verification and third-party certification influence user trust and investment behavior in robo-advisory systems. Method: An online vignette experiment (N=520) presented participants with the scenario of investing inherited money via an online banking service, manipulating the advisor's level of quality assurance (no information, certification by a reliable agency, formal verification, or a certified human advisor) as well as subsequent outcome feedback (investment success vs. failure), and measuring both subjective trust and actual monetary investment decisions. Contribution/Results: Neither assurance mechanism significantly increased subjective trust; formal verification did, however, improve participants' perception of their own mental model of the advisor and was descriptively associated with higher investments: the median investment in the formal verification condition was €65,000 versus €50,000 in the other conditions. These findings underscore that objective behavioral metrics, rather than self-reported trust alone, may more reliably capture the real-world impact of AI quality assurance mechanisms. The study thus provides methodological insights and empirical evidence for research on trustworthy human-AI interaction, highlighting the distinct roles of cognitive understanding and attitudinal trust in shaping user behavior toward verified AI systems.
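To illustrate the behavioral-metrics point, here is a minimal Python sketch (not the authors' analysis code) of how median investments in two conditions could be compared with a nonparametric test; all amounts, group sizes, and variable names below are invented for illustration:

```python
# Hypothetical sketch: comparing investment amounts across two
# experimental conditions with a Mann-Whitney U test. The simulated
# figures are placeholders, not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Simulated investment amounts (in euros) per condition.
verified = rng.normal(65_000, 15_000, size=130).clip(0)
control = rng.normal(50_000, 15_000, size=130).clip(0)

stat, p = mannwhitneyu(verified, control, alternative="two-sided")
print(f"median verified = {np.median(verified):,.0f} EUR, "
      f"median control = {np.median(control):,.0f} EUR, "
      f"U = {stat:.0f}, p = {p:.4f}")
```

A rank-based test like this is a common choice for investment amounts, since such data are typically skewed and better summarized by medians than means.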

📝 Abstract
This online-vignette study investigates the impact of certification and verification as measures for quality assurance of AI on trust in and use of a robo-advisor. Confronting 520 participants with an imaginary situation in which they were using an online banking service to invest inherited money, we formed four experimental groups. EG1 received no further information about their robo-advisor, EG2 was informed that their robo-advisor was certified by a reliable agency for unbiased processes, and EG3 was presented with a formally verified robo-advisor that was proven to consider their investment preferences. A control group was presented with a remote certified human financial advisor. All groups had to decide how much of their 10,000 euros they would give to their advisor to invest autonomously on their behalf, and reported on trust and perceived dependability. A second manipulation followed, confronting participants with either a successful or a failed investment. Overall, our results show that the advisor's level of quality assurance had surprisingly close to no effect on any of our outcome variables, except for people's perception of their own mental model of the advisor. Descriptively, differences between investments seem to favor a verified advisor, with a median investment of 65,000 euros (vs. 50,000). Success or failure information, though only partially influenced by advisor quality, was perceived as a more important cue for advisor trustworthiness, leading to substantially different trust and dependability ratings. The study shows the importance of thoroughly investigating not only trust but also trusting behavior with objective measures. It also underlines the need for future research on formal verification, which may be the gold standard for mathematically proving properties of AI but does not seem to take full effect as a trustworthiness cue for end-users.
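To make concrete what "formally verified to consider investment preferences" could mean in practice, here is a hedged toy sketch using the Z3 SMT solver in Python. The allocation rule, the risk-limit property, and all variable names are assumptions made up for this example, not the verification carried out in the paper:

```python
# Hypothetical sketch of formally verifying a robo-advisor property:
# we ask an SMT solver whether a toy allocation rule can ever put more
# money into risky assets than the user's stated risk limit allows.
from z3 import Real, Solver, unsat

budget = Real("budget")          # money the user hands to the advisor
risk_limit = Real("risk_limit")  # max fraction allowed in risky assets
risky = Real("risky")            # amount the rule allocates to risky assets

s = Solver()
# Toy allocation rule: invest half of the permitted risk budget.
s.add(risky == budget * risk_limit / 2)
# Domain assumptions.
s.add(budget >= 0, risk_limit >= 0, risk_limit <= 1)
# Search for a counterexample to the safety property
# "risky <= budget * risk_limit".
s.add(risky > budget * risk_limit)

if s.check() == unsat:
    print("Property proven: the rule never exceeds the user's risk limit.")
else:
    print("Counterexample found:", s.model())
```

Because no counterexample exists under these assumptions, the solver returns unsat and the property holds for all inputs, which is the sense in which formal verification provides a mathematical guarantee rather than test-based evidence.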
Problem

Research questions and friction points this paper is trying to address.

Examining formal verification's impact on robo-advisor trust and investment behavior
Investigating how certification affects user trust in AI financial advisors
Assessing whether mathematical proofs influence subjective user trust perceptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online-vignette study with 520 participants
Four experimental groups with different advisor assurances
Formal verification showed minimal effect on trust
Alina Tausch
Ruhr University Bochum, Universitätsstraße 150, 44801 Bochum, Germany; Witten/Herdecke University, Alfred-Herrhausen-Straße 50, 58455 Witten, Germany
Magdalena Wischnewski
Post-Doc at the Research Center for Trustworthy Data Science and Security
AI · trust calibration · automated journalism · motivated reasoning · misinformation
Mustafa Yalciner
Research Center Trustworthy Data Science and Security, TU Dortmund University, Joseph-von-Fraunhofer-Straße 25, Dortmund, Germany
Daniel Neider
TU Dortmund University and Center for Trustworthy Data Science and Security
Formal Methods · Machine Learning · Logic · Artificial Intelligence