Plan-Then-Execute: An Empirical Study of User Trust and Team Performance When Using LLM Agents As A Daily Assistant

📅 2025-02-03
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study investigates how users trust large language model (LLM) agents in everyday planning-and-execution tasks (e.g., flight booking, credit card payments) and how such trust affects human-AI collaboration across six real-world scenarios spanning a risk gradient. Method: a large-scale empirical experiment (N=248) employs an LLM-modulo architecture within a human-in-the-loop simulation environment, integrating multimodal behavioral measurements. Contribution/Results: we provide the first empirical evidence that plausible-seeming plans induce systematic overtrust, with perceived plan credibility emerging as a primary cause of collaboration failure. To address this, we propose a dual-path trust-calibration framework grounded in *plan explainability* and *execution controllability*. Results show that high-quality stepwise planning, coupled with judicious user intervention, significantly improves task success (+32.7%) and trust alignment (r=0.68), yielding actionable, deployable insights for designing trustworthy AI assistants.

📝 Abstract
Since the explosion in popularity of ChatGPT, large language models (LLMs) have continued to impact our everyday lives. Equipped with external tools designed for specific purposes (e.g., flight booking or an alarm clock), LLM agents are increasingly capable of assisting humans in their daily work. Although LLM agents show promise as daily assistants, there is limited understanding of how they can provide daily assistance based on planning and sequential decision-making capabilities. We draw inspiration from recent work that has highlighted the value of 'LLM-modulo' setups in conjunction with humans-in-the-loop for planning tasks. We conducted an empirical study (N = 248) of LLM agents as daily assistants in six commonly occurring tasks with different levels of associated risk (e.g., flight ticket booking and credit card payments). To ensure user agency and control over the LLM agent, we adopted LLM agents in a plan-then-execute manner, wherein the agents conducted step-wise planning and step-by-step execution in a simulation environment. We analyzed how user involvement at each stage affects trust and collaborative team performance. Our findings demonstrate that LLM agents can be a double-edged sword: (1) they can work well when a high-quality plan and the necessary user involvement in execution are available, and (2) users can easily misplace trust in LLM agents whose plans merely seem plausible. We synthesized key insights for using LLM agents as daily assistants to calibrate user trust and achieve better overall task outcomes. Our work has important implications for the future design of daily assistants and human-AI collaboration with LLM agents.
Problem

Research questions and friction points this paper is trying to address.

Trust
Team Collaboration
Language Models in Decision Tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large Language Models
User Trust
Collaborative Efficiency