Missing Pieces: How Do Designs that Expose Uncertainty Longitudinally Impact Trust in AI Decision Aids? An In Situ Study of Gig Drivers

📅 2024-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how uncertainty visualization influences users' long-term trust in AI decision-support tools, focusing on ride-hailing drivers' evolving trust in AI-driven shift scheduling recommendations in the gig economy. Drawing on a longitudinal mixed-methods field study with 51 active drivers, the authors identify a two-stage trust formation process: an initial impression phase, dominated by perceived accuracy, and a sustained interaction phase, moderated by the mode of uncertainty communication. They propose the design principle of "task-aligned uncertainty expression," operationalized via probabilistic intervals and confidence prompts, to improve trust stability and decision integration. Results show that this approach significantly improves trust resilience, and that driver experience and risk preference further moderate trust trajectories. The study contributes a context-sensitive design framework and empirical evidence for fostering trustworthy human-AI collaboration in dynamic, high-stakes operational settings.

📝 Abstract
Decision aids based on artificial intelligence (AI) induce a wide range of outcomes when they are deployed in uncertain environments. In this paper, we investigate how users' trust in recommendations from an AI decision aid is impacted over time by designs that expose uncertainty in predicted outcomes. Unlike previous work, we focus on gig driving, a real-world, repeated decision-making context. We report on a longitudinal mixed-methods study ($n=51$) where we measured gig drivers' trust as they interacted with an AI-based schedule recommendation tool. Our results show that participants' trust in the tool was shaped by both their first impressions of its accuracy and their longitudinal interactions with it; and that task-aligned framings of uncertainty improved trust by allowing participants to incorporate uncertainty into their decision-making processes. Additionally, we observed that trust depended on their characteristics as drivers, underscoring the need for more in situ studies of AI decision aids.
Problem

Research questions and friction points this paper is trying to address.

Impact of uncertainty-exposing designs on trust in AI aids
Longitudinal trust dynamics in gig drivers' AI tool usage
Role of driver characteristics in AI decision aid trust
Innovation

Methods, ideas, or system contributions that make the work stand out.

Longitudinal study on gig drivers' trust
Task-aligned uncertainty framing improves trust
In situ AI decision aid evaluation
Rex Chen
Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Ruiyi Wang
University of California, San Diego
Norman M. Sadeh
Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
Fei Fang
Carnegie Mellon University, Pittsburgh, Pennsylvania, USA