🤖 AI Summary
This study investigates how AI involvement influences users' trust in human experts, AI systems, and their combination in expertise-dependent decision-making contexts. Through a controlled user experiment (N=77) simulating an academic course-planning task, the research compares users' trust in, and perceived expertise of, expert recommendations across different human-AI collaboration paradigms. The findings reveal that user trust is not determined solely by recommendation accuracy but is also significantly shaped by how experts integrate AI into their workflow. Specifically, the transparency of expert-AI collaboration directly influences perceptions of the expert's professional credibility. These results underscore the critical role of "process visibility" in human-AI team design and offer a novel perspective for developing trustworthy hybrid intelligence systems.
📝 Abstract
The increasing integration of AI-powered tools into expert workflows in domains such as medicine, law, and finance raises a critical question: how does AI involvement influence a user's trust in the human expert, the AI system, and their combination? To investigate this, we conducted a user study (N=77) featuring a simulated course-planning task. We compared conditions that differed in both the presence of AI and the specific mode of human-AI collaboration. Our results indicate that while the advisor's ability to create a correct schedule is important, the user's perception of expertise and trust is also influenced by how the expert uses the AI assistant. These findings raise important considerations for the design of human-AI hybrid teams, particularly when the adoption of recommendations depends on the end user's perception of the recommender's expertise.