ProactiveVA: Proactive Visual Analytics with LLM-Based UI Agent

📅 2025-07-24
🤖 AI Summary
Current LLM-augmented visual analytics (VA) systems operate reactively, failing to guide users proactively during moments of high cognitive load or disorientation. This work introduces the first proactive UI agent framework for visual analysis, enabling real-time interaction-log monitoring and autonomous user intent recognition via a three-stage agent pipeline—perception, reasoning, and execution—to generate context-aware, timely suggestions. The framework prioritizes intent interpretability, intervention controllability, and contextual sensitivity, grounded in empirical user behavior studies that identified critical design requirements. Evaluated across two representative VA systems, the approach demonstrates significant improvements in analytical efficiency (quantified via algorithmic metrics and controlled user studies) and user experience (validated through expert interviews and detailed case analyses). This work establishes a novel paradigm for LLM-driven intelligent human–machine collaborative analysis, advancing beyond passive assistance toward adaptive, cognitively aware interaction support.

📝 Abstract
Visual analytics (VA) is typically applied to complex data and thus requires complex tools. While visual analytics empowers analysts in data analysis, analysts may occasionally get lost in this complexity, highlighting the need for intelligent assistance mechanisms. However, even the latest LLM-assisted VA systems only provide help when explicitly requested by the user, making them insufficiently intelligent to offer suggestions when analysts need them the most. We propose the ProactiveVA framework, in which an LLM-powered UI agent monitors user interactions and delivers context-aware assistance proactively. To design effective proactive assistance, we first conducted a formative study analyzing help-seeking behaviors in user interaction logs, identifying when users need proactive help, what assistance they require, and how the agent should intervene. Based on this analysis, we distilled key design requirements in terms of intent recognition, solution generation, interpretability, and controllability. Guided by these requirements, we developed a three-stage UI agent pipeline comprising perception, reasoning, and acting. The agent autonomously perceives users' needs from VA interaction logs, providing tailored suggestions and intuitive guidance through interactive exploration of the system. We implemented the framework in two representative types of VA systems, demonstrating its generalizability, and evaluated its effectiveness through an algorithm evaluation, a case and expert study, and a user study. We also discuss current design trade-offs of proactive VA and areas for further exploration.
Problem

Research questions and friction points this paper is trying to address.

LLM-based UI agent for proactive visual analytics assistance
Identifying user needs from interaction logs for timely help
Designing context-aware suggestions in complex VA systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-powered UI agent monitors interactions
Context-aware proactive assistance framework
Three-stage pipeline: perception, reasoning, acting
Yuheng Zhao
Fudan University
Data Visualization · Visual Analytics · Human-AI Collaboration
Xueli Shu
Fudan University
Liwen Fan
Fudan University
Lin Gao
Fudan University
Yu Zhang
University of Oxford
Siming Chen
Fudan University, Ji Hua Laboratory, Shanghai Key Laboratory of Data Science