InterChat: Enhancing Generative Visual Analytics using Multimodal Interactions

📅 2025-03-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In generative visual analytics, ambiguous user intent expression and a large semantic gap in multimodal interaction hinder accurate and efficient recognition. To address this, we propose the first progressive analysis system that seamlessly integrates natural language input with direct manipulation of visual elements. Our method introduces: (1) a collaborative multi-LLM agent framework for intent inference, incorporating contextual interaction linking and an extensible multimodal workflow; and (2) cross-modal intent alignment modeling with interaction-history awareness, enhanced by fine-grained prompt engineering and a visual direct-control interface. Two user studies across distinct analytical scenarios demonstrate that our approach improves intent recognition accuracy by 37% and reduces task completion time by 42%, while significantly enhancing system interpretability and usability.

Technology Category

Application Category

📝 Abstract
The rise of Large Language Models (LLMs) and generative visual analytics systems has transformed data-driven insights, yet significant challenges persist in accurately interpreting users' analytical and interaction intents. While language inputs offer flexibility, they often lack precision, making the expression of complex intents inefficient, error-prone, and time-intensive. To address these limitations, we investigate the design space of multimodal interactions for generative visual analytics through a literature review and pilot brainstorming sessions. Building on these insights, we introduce a highly extensible workflow that integrates multiple LLM agents for intent inference and visualization generation. We develop InterChat, a generative visual analytics system that combines direct manipulation of visual elements with natural language inputs. This integration enables precise intent communication and supports progressive, visually driven exploratory data analyses. By employing effective prompt engineering, and contextual interaction linking, alongside intuitive visualization and interaction designs, InterChat bridges the gap between user interactions and LLM-driven visualizations, enhancing both interpretability and usability. Extensive evaluations, including two usage scenarios, a user study, and expert feedback, demonstrate the effectiveness of InterChat. Results show significant improvements in the accuracy and efficiency of handling complex visual analytics tasks, highlighting the potential of multimodal interactions to redefine user engagement and analytical depth in generative visual analytics.
Problem

Research questions and friction points this paper is trying to address.

Addresses challenges in interpreting user intents in visual analytics
Improves precision and efficiency of complex intent communication
Enhances user engagement and analytical depth through multimodal interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal interactions enhance generative visual analytics
Integrates multiple LLM agents for intent inference
Combines direct manipulation with natural language inputs
🔎 Similar Papers
No similar papers found.