Can VLM Pseudo-Labels Train a Time-Series QA Model That Outperforms the VLM?

📅 2025-09-29
🤖 AI Summary
To address the scarcity of labeled data in time-series question answering (TSQA), this paper proposes a supervised training paradigm based on pseudo-label distillation from vision-language models (VLMs). Although VLM-generated pseudo-labels are inherently noisy, the distilled lightweight TSQA model significantly outperforms the original VLM's zero-shot performance. Methodologically, a VLM generates question-answer pairs over large-scale unlabeled time-series charts, and these pseudo-labels then supervise a dedicated time-series encoder-decoder model. Experiments demonstrate state-of-the-art performance across multiple TSQA benchmarks. Crucially, this work provides the first systematic evidence that deep models' robustness to pseudo-label noise can be harnessed to enhance temporal reasoning, turning label imperfection into useful supervision. The approach establishes a novel, low-resource paradigm for time-series semantic parsing.
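The pipeline described above can be sketched in code. The following is a minimal, hypothetical illustration (not the authors' implementation): the VLM call is mocked with a deterministic stub, whereas a real pipeline would render each series as a chart image and query an actual vision-language model. All names here are illustrative assumptions.

```python
# Hypothetical sketch of the pseudo-label generation step: a VLM acts as a
# zero-shot annotator over unlabeled time series, producing (possibly noisy)
# QA pairs that later supervise a lightweight TSQA model.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PseudoLabeledExample:
    series: List[float]  # raw time-series values
    question: str        # question posed about the series
    answer: str          # VLM-generated (possibly noisy) answer


def mock_vlm_answer(series: List[float], question: str) -> str:
    """Stand-in for a zero-shot VLM queried on a chart of `series`.
    A real pipeline would render the series and call a VLM API."""
    if question == "Is the series increasing overall?":
        return "yes" if series[-1] > series[0] else "no"
    return "unknown"


def build_pseudo_labels(
    unlabeled_series: List[List[float]],
    questions: List[str],
    vlm: Callable[[List[float], str], str],
) -> List[PseudoLabeledExample]:
    """Generate QA pairs over unlabeled series, using the VLM as annotator."""
    dataset = []
    for series in unlabeled_series:
        for q in questions:
            dataset.append(PseudoLabeledExample(series, q, vlm(series, q)))
    return dataset


if __name__ == "__main__":
    raw = [[1.0, 2.0, 3.0], [5.0, 4.0, 2.0]]
    qs = ["Is the series increasing overall?"]
    data = build_pseudo_labels(raw, qs, mock_vlm_answer)
    print([ex.answer for ex in data])  # noisy pseudo-labels for training
```

The resulting dataset would then be used to train the time-series encoder-decoder model in the usual supervised fashion.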

📝 Abstract
Time-series question answering (TSQA) tasks face significant challenges due to the lack of labeled data. Meanwhile, with recent advances in large-scale models, vision-language models (VLMs) have demonstrated the potential to analyze time-series signals in a zero-shot manner. In this paper, we propose a training approach that uses pseudo labels generated by a VLM. Although VLMs can produce incorrect labels, TSQA models can still be effectively trained, based on the property that deep neural networks are inherently robust to such noisy labels. Our experimental results demonstrate that TSQA models are not only successfully trained with pseudo labels, but also surpass the performance of the VLM itself by leveraging a large amount of unlabeled data.
Problem

Research questions and friction points this paper is trying to address.

Addressing labeled data scarcity in time-series question answering tasks
Training TSQA models using VLM-generated pseudo-labels despite inaccuracies
Leveraging large amounts of unlabeled data to surpass the original VLM's zero-shot performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using VLM-generated pseudo-labels as the sole supervision signal
Exploiting deep neural networks' inherent robustness to noisy labels
Distilled model surpasses the VLM itself by leveraging unlabeled data
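The central claim, that a student trained on noisy teacher labels can beat the teacher, can be illustrated with a toy experiment. This sketch is not from the paper: a "teacher" flips 20% of true labels at random, and a low-capacity "student" (a single learned threshold) fit on many noisy examples averages out the independent noise and ends up more accurate than its own supervision.

```python
# Toy illustration of noise-robust distillation: a simple student trained on
# noisy teacher labels recovers the true decision rule and beats the teacher.
import random

random.seed(0)

def true_label(x: float) -> int:
    """Ground-truth rule: positive inputs are class 1."""
    return 1 if x > 0 else 0

def noisy_teacher(x: float, flip_prob: float = 0.2) -> int:
    """Teacher (analogous to the VLM): correct label flipped 20% of the time."""
    y = true_label(x)
    return 1 - y if random.random() < flip_prob else y

# Pseudo-labeled training set produced by the noisy teacher.
train_x = [random.uniform(-1, 1) for _ in range(1000)]
train_y = [noisy_teacher(x) for x in train_x]

# Student: pick the single threshold minimizing error on the noisy labels.
def error_at(t: float) -> int:
    return sum((1 if x > t else 0) != y for x, y in zip(train_x, train_y))

best_t = min(sorted(train_x), key=error_at)

# Evaluate teacher vs. student against ground truth on fresh data.
test_x = [random.uniform(-1, 1) for _ in range(1000)]
teacher_acc = sum(noisy_teacher(x) == true_label(x) for x in test_x) / len(test_x)
student_acc = sum((1 if x > best_t else 0) == true_label(x) for x in test_x) / len(test_x)
print(f"teacher={teacher_acc:.2f} student={student_acc:.2f}")
```

With enough pseudo-labeled data, the learned threshold lands near the true boundary, so the student's accuracy approaches 100% while the teacher stays around 80%, mirroring the paper's finding that the distilled TSQA model outperforms the VLM that labeled its data.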