Less or More: Towards Glanceable Explanations for LLM Recommendations Using Ultra-Small Devices

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the problem of overly verbose LLM-generated recommendation explanations on ultra-small-screen devices (e.g., smartwatches), this paper explores two complementary techniques: (1) spatially structured prompting, which instructs the LLM to decompose its explanation into predefined contextual components suited to the screen layout; and (2) temporally adaptive presentation, which modulates when explanations are shown based on the AI's confidence level. A mixed-method user study (quantitative task-performance metrics plus qualitative interviews) found that structured explanations reduced users' time to action and cognitive load, and that always-on structured explanations increased acceptance of AI recommendations. However, users were less satisfied with structured explanations than with unstructured ones, which offered more sufficient, readable detail, and adaptive presentation was less effective at improving perceptions of the AI than always-on presentation. The findings yield design implications for personalizing the content and timing of LLM explanations displayed on ultra-small devices.

📝 Abstract
Large Language Models (LLMs) have shown remarkable potential in recommending everyday actions as personal AI assistants, while Explainable AI (XAI) techniques are being increasingly utilized to help users understand why a recommendation is given. Personal AI assistants today are often located on ultra-small devices such as smartwatches, which have limited screen space. The verbosity of LLM-generated explanations, however, makes it challenging to deliver glanceable LLM explanations on such ultra-small devices. To address this, we explored 1) spatially structuring an LLM's explanation text using defined contextual components during prompting and 2) presenting temporally adaptive explanations to users based on confidence levels. We conducted a user study to understand how these approaches impacted user experiences when interacting with LLM recommendations and explanations on ultra-small devices. The results showed that structured explanations reduced users' time to action and cognitive load when reading an explanation. Always-on structured explanations increased users' acceptance of AI recommendations. However, users were less satisfied with structured explanations compared to unstructured ones due to their lack of sufficient, readable details. Additionally, adaptively presenting structured explanations was less effective at improving user perceptions of the AI compared to the always-on structured explanations. Together with users' interview feedback, the results led to design implications to be mindful of when personalizing the content and timing of LLM explanations that are displayed on ultra-small devices.
Problem

Research questions and friction points this paper is trying to address.

Optimize LLM explanations for ultra-small devices
Spatially structure explanations to reduce cognitive load
Adapt explanations temporally based on confidence levels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spatially structured LLM explanations
Temporally adaptive explanation presentation
User study on ultra-small devices
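The two techniques above can be sketched in code. This is an illustrative sketch only: the paper does not publish its prompts, component names, or confidence thresholds, so the `WHY`/`WHEN`/`WHERE` components, the 0.7 threshold, and the direction of the confidence gate (showing the explanation when confidence is low) are all assumptions made for this example.

```python
# Hypothetical prompt asking the LLM for layout-aware semantic units;
# the component names and word limits are assumptions, not the paper's prompt.
STRUCTURED_PROMPT = (
    "Explain the recommendation using exactly these labeled components, "
    "each under six words, one per line:\n"
    "WHY: <main reason>\n"
    "WHEN: <relevant time context>\n"
    "WHERE: <relevant place context>"
)

def present_explanation(explanation: str, confidence: float,
                        adaptive: bool, threshold: float = 0.7) -> str:
    """Temporally adaptive presentation (sketch).

    In adaptive mode, suppress the explanation when the AI's confidence is
    at or above the threshold (assumption: users only need the rationale
    when the AI is unsure); in always-on mode, show it unconditionally.
    """
    if adaptive and confidence >= threshold:
        return ""  # recommendation is shown alone, no explanation
    return explanation

# A structured explanation, as a smartwatch might render it line by line:
structured = "WHY: meeting in 10 min\nWHEN: now\nWHERE: office"
print(present_explanation(structured, confidence=0.9, adaptive=False))  # shown
print(present_explanation(structured, confidence=0.9, adaptive=True))   # suppressed
```

In this sketch, "always-on structured" corresponds to `adaptive=False`, the condition the study found most effective at increasing acceptance of AI recommendations.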
Xinru Wang
Purdue University
Human-AI interaction · explainable AI
Mengjie Yu
Assistant Professor at UC Berkeley, EECS
Nonlinear photonics · nanophotonics · mid-infrared optics · quantum devices
Hannah Nguyen
Meta Reality Labs, Redmond, Washington, USA
Michael Iuzzolino
Meta Reality Labs, Redmond, Washington, USA
Tianyi Wang
Meta Reality Labs, Redmond, Washington, USA
Peiqi Tang
Meta Reality Labs, Redmond, Washington, USA
Natasha Lynova
Meta Reality Labs, Redmond, Washington, USA
Quoc Co Tran
Meta Reality Labs, Redmond, Washington, USA
Ting Zhang
Meta Reality Labs, Redmond, Washington, USA
Naveen Sendhilnathan
Meta Reality Labs, Redmond, Washington, USA
Hrvoje Benko
Director, Meta Reality Labs Research; Affiliate Professor, University of Washington
Augmented Reality · Human-Computer Interaction · Sensing · Haptics · Natural User Interfaces
Haijun Xia
Assistant Professor, University of California, San Diego
Human-AI Collaboration · Interaction Paradigms
Tanya Jonker
Meta Reality Labs, Redmond, Washington, USA