ChartGaze: Enhancing Chart Understanding in LVLMs with Eye-Tracking Guided Attention Refinement

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large Vision-Language Models (LVLMs) frequently attend to irrelevant regions in Chart Question Answering (CQA), degrading both reasoning accuracy and interpretability. To address this, the paper introduces ChartGaze, the first eye-movement dataset designed specifically for chart reasoning, together with a gaze-guided attention refinement method that enables fine-grained calibration of cross-modal (image-text) attention. The approach is plug-and-play and requires no architectural modifications to the underlying LVLMs. Experiments across multiple state-of-the-art LVLMs show consistent improvements, with accuracy gains of up to 2.56 percentage points. The spatial alignment between model attention heatmaps and human fixation maps also improves significantly, as measured by Pearson correlation (+18.7%). This work establishes a new paradigm for interpretable chart understanding grounded in human visual behavior.

📝 Abstract
Charts are a crucial visual medium for communicating and representing information. While Large Vision-Language Models (LVLMs) have made progress on chart question answering (CQA), the task remains challenging, particularly when models attend to irrelevant regions of the chart. In this work, we present ChartGaze, a new eye-tracking dataset that captures human gaze patterns during chart reasoning tasks. Through a systematic comparison of human and model attention, we find that LVLMs often diverge from human gaze, leading to reduced interpretability and accuracy. To address this, we propose a gaze-guided attention refinement that aligns image-text attention with human fixations. Our approach improves both answer accuracy and attention alignment, yielding gains of up to 2.56 percentage points across multiple models. These results demonstrate the promise of incorporating human gaze to enhance both the reasoning quality and interpretability of chart-focused LVLMs.
Problem

Research questions and friction points this paper is trying to address.

Improving chart question answering accuracy in LVLMs
Aligning model attention with human gaze patterns
Reducing irrelevant region focus during chart interpretation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Eye-tracking dataset for chart reasoning
Gaze-guided attention refinement technique
Aligns model attention with human fixations
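The refinement idea above can be sketched as an auxiliary alignment loss: treat the human fixation map and the model's image-patch attention map as probability distributions and penalize their divergence. The sketch below is a minimal illustration under that assumption; the function name `gaze_alignment_loss`, the KL-divergence formulation, and the weighting scheme are hypothetical and may differ from the paper's exact objective.

```python
import numpy as np

def gaze_alignment_loss(attn_map, fixation_map, eps=1e-8):
    """KL divergence from the human fixation distribution (target)
    to the model's attention distribution. Both inputs are 2D arrays
    of non-negative weights over image patches (hypothetical form)."""
    # Normalize each map into a probability distribution over patches.
    p = fixation_map / (fixation_map.sum() + eps)  # human gaze target
    q = attn_map / (attn_map.sum() + eps)          # model attention
    # KL(p || q); eps keeps the log well-defined for zero entries.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# In training, this term would typically be added to the task loss,
# e.g. total_loss = answer_loss + lam * gaze_alignment_loss(...),
# with lam a tuning hyperparameter (assumed, not from the paper).
```

When the model's attention already matches the fixation map, the loss is near zero; attention spread over irrelevant regions while humans fixate a small area drives the loss up, nudging the attention toward human-like focus.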