ReSpec: Relevance and Specificity Grounded Online Filtering for Learning on Video-Text Data Streams

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Rapid growth in video-text streaming data has exacerbated storage and computational bottlenecks, and existing online learning methods struggle to balance efficiency with downstream task performance. To address this, the paper proposes an unsupervised dual-criterion online filtering framework that dynamically selects high-value samples based on (i) task relevance, measured via probabilistic alignment with downstream zero-shot retrieval objectives, and (ii) semantic specificity, quantified as the distance from a root embedding in the joint embedding space. This eliminates the need for full-buffer caching or human annotation. The approach integrates cross-modal alignment modeling, task-oriented probabilistic evaluation, and streaming-aware real-time decision making. Evaluated on WebVid2M and VideoCC3M, it achieves state-of-the-art performance across five zero-shot video retrieval benchmarks using only 5% of the data, while substantially reducing both computational cost and memory footprint.

📝 Abstract
The rapid growth of video-text data presents challenges in storage and computation during training. Online learning, which processes streaming data in real-time, offers a promising solution to these issues while also allowing swift adaptation in scenarios demanding real-time responsiveness. One strategy to enhance the efficiency and effectiveness of learning involves identifying and prioritizing data that improves performance on target downstream tasks. We propose the Relevance and Specificity-based online filtering framework (ReSpec), which selects data based on four criteria: (i) modality alignment for clean data, (ii) task relevance for target-focused data, (iii) specificity for informative and detailed data, and (iv) efficiency for low-latency processing. Relevance is determined by the probabilistic alignment of incoming data with downstream tasks, while specificity employs the distance to a root embedding, representing the least specific data, as an efficient proxy for informativeness. By establishing reference points from target task data, ReSpec filters incoming data in real-time, eliminating the need for extensive storage and compute. Evaluated on the large-scale datasets WebVid2M and VideoCC3M, ReSpec attains state-of-the-art performance on five zero-shot video retrieval tasks, using as little as 5% of the data while incurring minimal compute. The source code is available at https://github.com/cdjkim/ReSpec.
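The abstract's two quantitative criteria can be illustrated with a minimal sketch: relevance as similarity of an incoming sample's embedding to reference points drawn from target task data, and specificity as distance from a root embedding representing the least specific data. This is a hypothetical illustration, not the paper's implementation; the function name, thresholds, and the use of max cosine similarity are assumptions, and the actual method uses probabilistic alignment.

```python
import numpy as np

def respec_filter(sample_emb, task_embs, root_emb,
                  relevance_thresh=0.3, specificity_thresh=0.5):
    """Hypothetical sketch of dual-criterion online filtering.

    sample_emb: (d,) embedding of an incoming video-text sample
    task_embs:  (k, d) reference embeddings from target task data
    root_emb:   (d,) embedding standing in for the least specific data
    """
    # Relevance proxy: max cosine similarity to any task reference point
    # (the paper uses probabilistic alignment; cosine is an assumption here)
    sims = task_embs @ sample_emb / (
        np.linalg.norm(task_embs, axis=1) * np.linalg.norm(sample_emb) + 1e-8)
    relevance = sims.max()

    # Specificity proxy: distance from the root embedding
    # (farther from the root = more specific, more informative)
    specificity = np.linalg.norm(sample_emb - root_emb)

    # Keep the sample only if both criteria pass their thresholds
    return bool(relevance >= relevance_thresh
                and specificity >= specificity_thresh)
```

Because each decision depends only on the current sample and a small set of fixed reference embeddings, the filter runs in real time without buffering the stream, which matches the storage and compute savings the abstract claims.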
Problem

Research questions and friction points this paper is trying to address.

Efficient online filtering of video-text data streams
Prioritizing task-relevant and informative data for learning
Reducing storage and computation in real-time video-text processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online filtering framework for video-text data
Relevance and specificity based data selection
Real-time processing with minimal storage