Do It For Me vs. Do It With Me: Investigating User Perceptions of Different Paradigms of Automation in Copilots for Feature-Rich Software

📅 2025-04-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
LLM-powered in-application copilots for feature-rich software face the challenge of balancing automation with user control, learnability, and task adaptability. Method: This paper comparatively evaluates a fully automated paradigm (AutoCopilot) against a semi-automated, guidance-driven one (GuidedCopilot), introducing task- and interface-state-aware enhancements (in-context preview clips and adaptive instruction generation) in a dual-paradigm prototype that combines step-by-step visual guidance with dynamic instruction execution. Contribution/Results: A user study (N=20) provides empirical evidence that GuidedCopilot significantly outperforms AutoCopilot on exploratory and creative tasks, achieving superior outcomes in perceived control, practical utility, and learnability. A follow-up design exploration (N=10) confirms that the proposed enhancements further improve interaction depth and contextual adaptability.

📝 Abstract
Large Language Model (LLM)-based in-application assistants, or copilots, can automate software tasks, but users often prefer learning by doing, raising questions about the optimal level of automation for an effective user experience. We investigated two automation paradigms by designing and implementing a fully automated copilot (AutoCopilot) and a semi-automated copilot (GuidedCopilot) that automates trivial steps while offering step-by-step visual guidance. In a user study (N=20) across data analysis and visual design tasks, GuidedCopilot outperformed AutoCopilot in user control, software utility, and learnability, especially for exploratory and creative tasks, while AutoCopilot saved time for simpler visual tasks. A follow-up design exploration (N=10) enhanced GuidedCopilot with task- and state-aware features, including in-context preview clips and adaptive instructions. Our findings highlight the critical role of user control and tailored guidance in designing the next generation of copilots that enhance productivity, support diverse skill levels, and foster deeper software engagement.
Problem

Research questions and friction points this paper is trying to address.

Investigating the optimal level of automation for user experience in copilots
Comparing fully automated vs. semi-automated copilot paradigms
Enhancing user control and learnability in feature-rich software assistants
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fully automated vs. semi-automated copilot paradigms
Step-by-step visual guidance for user control
Task-aware features with adaptive instructions
Anjali Khurana
Simon Fraser University
Human-AI Interaction · Human-Computer Interaction
Xiaotian Su
PhD, ETH Zürich
HCI · NLP · AI in Education
April Yi Wang
Computer Science, ETH Zürich, Zürich, Switzerland
Parmit K. Chilana
Computing Science, Simon Fraser University, Burnaby, BC, Canada