MisVisFix: An Interactive Dashboard for Detecting, Explaining, and Correcting Misleading Visualizations using Large Language Models

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Misleading visualizations severely compromise data interpretation, yet existing tools lack comprehensive support for detection, attribution, and correction. This paper introduces MisVisFix, a large language model (LLM)-based interactive visualization debugging system covering all 74 known misleading patterns. It supports fine-grained issue identification, dynamic adaptation to emerging deceptive strategies, and automatic generation of corrected charts. The system integrates Claude and GPT-series models with rule-augmented prompt engineering, a dedicated visualization analysis engine, and a natural-language dialogue interface. In benchmark evaluation it correctly identifies 96% of visualization issues, and an expert user study shows significant improvements in misleading-pattern identification accuracy, fact-checking efficiency, and chart credibility. The core contribution is the first end-to-end, LLM-driven visualization debugging framework, unifying detection, root-cause attribution, and remediation within a single interactive system.
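The rule-augmented prompting described above can be sketched as injecting the catalog of misleading patterns into the prompt and asking the model for a structured severity grade per rule. The rule names, prompt wording, and JSON schema below are illustrative assumptions, not the paper's actual prompts, and the model reply is stubbed in place of a real Claude/GPT call.

```python
import json

# Hypothetical subset of the 74 misleading-visualization rules;
# names and descriptions are illustrative, not from the paper.
RULES = {
    "truncated_axis": "Y-axis does not start at zero, exaggerating differences.",
    "dual_axis": "Two y-axes with different scales invite false comparisons.",
    "cherry_picked_range": "Date range excludes data that would change the trend.",
}

def build_prompt(chart_description: str) -> str:
    """Rule-augmented prompt: embed the rule catalog so the model grades
    each pattern as major / minor / potential / absent."""
    rule_text = "\n".join(f"- {k}: {v}" for k, v in RULES.items())
    return (
        "You are a visualization auditor. Check the chart against each rule "
        "and reply with JSON mapping rule id to one of "
        '["major", "minor", "potential", "absent"].\n'
        f"Rules:\n{rule_text}\n\nChart:\n{chart_description}"
    )

def parse_findings(model_reply: str) -> dict:
    """Validate the model's JSON: keep only known rules with a valid label."""
    labels = {"major", "minor", "potential", "absent"}
    raw = json.loads(model_reply)
    return {k: v for k, v in raw.items() if k in RULES and v in labels}

# Stubbed reply stands in for an actual LLM call; note the unknown
# rule id "bogus" is discarded by validation.
reply = '{"truncated_axis": "major", "dual_axis": "absent", "bogus": "major"}'
print(parse_findings(reply))
# → {'truncated_axis': 'major', 'dual_axis': 'absent'}
```

Validating the model's output against the fixed rule catalog is what keeps a pipeline like this robust to hallucinated pattern names.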

📝 Abstract
Misleading visualizations pose a significant challenge to accurate data interpretation. While recent research has explored the use of Large Language Models (LLMs) for detecting such misinformation, practical tools that also support explanation and correction remain limited. We present MisVisFix, an interactive dashboard that leverages both Claude and GPT models to support the full workflow of detecting, explaining, and correcting misleading visualizations. MisVisFix correctly identifies 96% of visualization issues and addresses all 74 known visualization misinformation types, classifying them as major, minor, or potential concerns. It provides detailed explanations, actionable suggestions, and automatically generates corrected charts. An interactive chat interface allows users to ask about specific chart elements or request modifications. The dashboard adapts to newly emerging misinformation strategies through targeted user interactions. User studies with visualization experts and developers of fact-checking tools show that MisVisFix accurately identifies issues and offers useful suggestions for improvement. By transforming LLM-based detection into an accessible, interactive platform, MisVisFix advances visualization literacy and supports more trustworthy data communication.
Problem

Research questions and friction points this paper is trying to address.

Detecting and correcting misleading visualizations using LLMs
Providing explanations and actionable suggestions for chart issues
Supporting interactive user engagement to improve visualization accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Claude and GPT for detection and correction
Classifies 74 visualization misinformation types automatically
Provides interactive chat for user modifications