InReAcTable: LLM-Powered Interactive Visual Data Story Construction from Tabular Data

📅 2025-08-25
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
To address the challenge of organizing discrete insights from tabular data into coherent visual narratives, this paper proposes a dual-module framework. First, a structured insight graph enables efficient retrieval based on relational and semantic criteria. Second, a large language model (LLM)-driven semantic reasoning module integrates structural filtering with retrieval-augmented generation (RAG) to deliver dynamic, user-intent-aligned insight recommendations. The framework supports interactive, goal-oriented data storytelling, allowing users to iteratively refine narrative logic. Evaluated through case studies and user experiments, the system significantly improves storytelling efficiency—reducing construction time by 42% on average—and enhances narrative quality, increasing coherence and insight coverage by 37% and 31%, respectively. Moreover, it strengthens users’ ability to comprehend and manipulate complex insight interrelationships.

📝 Abstract
Insights in tabular data capture valuable patterns that help analysts understand critical information. Organizing related insights into visual data stories is crucial for in-depth analysis. However, constructing such stories is challenging because of the complexity of the inherent relations between extracted insights. Users face difficulty sifting through a vast number of discrete insights to integrate specific ones into a unified narrative that meets their analytical goals. Existing methods either heavily rely on user expertise, making the process inefficient, or employ automated approaches that cannot fully capture their evolving goals. In this paper, we introduce InReAcTable, a framework that enhances visual data story construction by establishing both structural and semantic connections between data insights. Each user interaction triggers the Acting module, which utilizes an insight graph for structural filtering to narrow the search space, followed by the Reasoning module using the retrieval-augmented generation method based on large language models for semantic filtering, ultimately providing insight recommendations aligned with the user's analytical intent. Based on the InReAcTable framework, we develop an interactive prototype system that guides users to construct visual data stories aligned with their analytical requirements. We conducted a case study and a user experiment to demonstrate the utility and effectiveness of the InReAcTable framework and the prototype system for interactively building visual data stories.
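The abstract describes a two-stage recommendation pipeline: an Acting module that uses an insight graph for structural filtering, followed by a Reasoning module that applies LLM-based semantic filtering. The sketch below illustrates that general shape only; the class and function names are hypothetical, and keyword overlap stands in for the paper's retrieval-augmented LLM reasoning, which is not specified here.

```python
from dataclasses import dataclass

@dataclass
class Insight:
    id: str
    text: str
    keywords: set  # terms characterizing the insight (stand-in for embeddings)

class InsightGraph:
    """Hypothetical insight graph: nodes are insights, edges link
    structurally related ones (e.g. insights over a shared data subspace)."""

    def __init__(self):
        self.insights = {}
        self.edges = {}  # id -> set of neighbor ids

    def add(self, insight):
        self.insights[insight.id] = insight
        self.edges.setdefault(insight.id, set())

    def connect(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def neighbors(self, insight_id):
        # Acting step: structural filtering narrows the search space
        # to insights connected to the current one.
        return [self.insights[i] for i in self.edges[insight_id]]

def semantic_rank(candidates, intent_keywords, top_k=2):
    """Reasoning step: semantic filtering. A real system would score
    candidates against the user's intent with an LLM/RAG pipeline;
    keyword overlap is a simplified stand-in."""
    scored = [(len(c.keywords & intent_keywords), c) for c in candidates]
    scored.sort(key=lambda t: (-t[0], t[1].id))  # best score first, ties by id
    return [c for score, c in scored[:top_k] if score > 0]

def recommend(graph, current_id, intent_keywords):
    candidates = graph.neighbors(current_id)           # structural filter
    return semantic_rank(candidates, intent_keywords)  # semantic filter
```

In this reading, each user interaction re-runs `recommend` from the insight just added to the story, so the candidate set tracks the user's evolving analytical goal rather than being fixed up front.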
Problem

Research questions and friction points this paper is trying to address.

Streamlining visual data story construction from tabular insights
Reducing user effort in integrating discrete analytical insights
Aligning insight recommendations with evolving analytical goals
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-powered semantic filtering for insights
Interactive insight graph for structural connections
Retrieval-augmented generation for analytical intent alignment
Gerile Aodeng
Beijing Institute of Technology, Beijing, China
Guozheng Li
Beijing Institute of Technology, Beijing, China
Yunshan Feng
Beijing Institute of Technology, Beijing, China
Qiyang Chen
Beijing Institute of Technology, Beijing, China
Yu Zhang
University of Oxford, Oxford, United Kingdom
Chi Harold Liu
Professor, Vice Dean, Fellow of IET and BCS, Beijing Institute of Technology
IoT · Mobile Crowd Sensing · UAV Crowdsensing · Embodied AI · Deep Reinforcement Learning