Boosting Text-to-Chart Retrieval through Training with Synthesized Semantic Insights

๐Ÿ“… 2025-05-15
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing text-to-chart retrieval methods suffer from insufficient modeling of chart semantics and contextual information, leading to low retrieval accuracy in business intelligence (BI) scenarios. To address this, we propose ChartFinder — a novel framework featuring a three-tier semantic insight synthesis pipeline (visual, statistical, and task-oriented) and the first real-world BI-driven text-to-chart retrieval benchmark, CRBench. We further introduce a hierarchical semantic enhancement training paradigm that overcomes the semantic blind spots of CLIP-style models in chart understanding. On CRBench, ChartFinder achieves up to 66.9% NDCG@10 on precise queries, outperforming the state of the art by 11.58%. Moreover, it improves average performance on fuzzy (exploratory) queries by 5%, demonstrating the effectiveness of multi-granularity semantic modeling for diverse analytical needs.
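The headline metric, NDCG@10, discounts the relevance of each retrieved chart by its rank position and normalizes against the ideal ordering. A minimal sketch of the computation (not taken from the paper; `rels` is a hypothetical list of graded relevance labels in retrieved order):

```python
import math

def dcg_at_k(rels, k=10):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))

def ndcg_at_k(rels, k=10):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0
```

A perfectly ordered result list scores 1.0; placing the only relevant chart at rank 2 instead of rank 1 drops the score to 1/log2(3) ≈ 0.63.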

๐Ÿ“ Abstract
Charts are crucial for data analysis and decision-making. Text-to-chart retrieval systems have become increasingly important for Business Intelligence (BI), where users need to find relevant charts that match their analytical needs. These needs can be categorized into precise queries that are well-specified and fuzzy queries that are more exploratory -- both require understanding the semantics and context of the charts. However, existing text-to-chart retrieval solutions often fail to capture the semantic content and contextual information of charts, primarily due to the lack of comprehensive metadata (or semantic insights). To address this limitation, we propose a training data development pipeline that automatically synthesizes hierarchical semantic insights for charts, covering visual patterns (visual-oriented), statistical properties (statistics-oriented), and practical applications (task-oriented), producing 207,498 semantic insights for 69,166 charts. Based on these, we train a CLIP-based model named ChartFinder to learn better representations of charts for text-to-chart retrieval. Our method leverages rich semantic insights during the training phase to develop a model that understands both the visual and semantic aspects of charts. To evaluate text-to-chart retrieval performance, we curate the first benchmark for this task, CRBench, with 21,862 charts and 326 text queries from real-world BI applications, with ground-truth labels verified by crowd workers. Experiments show that ChartFinder significantly outperforms existing methods in text-to-chart retrieval tasks across various settings. For precise queries, ChartFinder achieves up to 66.9% NDCG@10, which is 11.58% higher than state-of-the-art models. On fuzzy queries, our method also demonstrates consistent improvements, with an average increase of 5% across nearly all metrics.
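Since ChartFinder is CLIP-based, retrieval at query time reduces to nearest-neighbor search over normalized embeddings: the text query and each chart image are encoded into a shared space, and charts are ranked by cosine similarity. A minimal sketch of that scoring step, assuming precomputed embeddings (the encoders themselves are not shown, and all names here are illustrative, not the paper's API):

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale vectors to unit length so the dot product equals cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def rank_charts(query_emb, chart_embs):
    """Rank charts by cosine similarity to the query embedding, best first.

    query_emb:  (d,) text-query embedding
    chart_embs: (n, d) matrix of chart embeddings
    Returns (order, scores): chart indices in descending similarity,
    and the similarity score for each ranked position.
    """
    q = l2_normalize(query_emb)
    c = l2_normalize(chart_embs)
    scores = c @ q                 # cosine similarity per chart
    order = np.argsort(-scores)    # descending
    return order, scores[order]
```

In practice the chart embeddings would be computed once offline and indexed, so each query costs a single matrix-vector product over the corpus.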
Problem

Research questions and friction points this paper is trying to address.

Enhancing text-to-chart retrieval by capturing semantic content and context
Addressing lack of comprehensive metadata in existing chart retrieval systems
Improving performance for both precise and fuzzy analytical queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically synthesizes hierarchical semantic insights for charts
Trains CLIP-based model ChartFinder with rich semantic insights
Creates benchmark CRBench for text-to-chart retrieval evaluation
๐Ÿ”Ž Similar Papers
No similar papers found.