Multi-Modal Video Feature Extraction for Popularity Prediction

📅 2025-01-02
📈 Citations: 0
Influential: 0
📄 PDF

🤖 AI Summary
This study addresses short-video popularity prediction by proposing a multimodal, dual-path joint modeling framework. Methodologically, it introduces a novel video-text collaborative prompting mechanism to elicit semantic descriptions from Qwen-VL or Video-LLaMA; integrates visual features extracted via ResNet/I3D/ViT, subtitle embeddings encoded by BERT, and handcrafted temporal and social features; and jointly predicts four engagement metrics—views, likes, comments, and shares—using a multi-head neural regression network alongside XGBoost. Experiments on a real-world short-video dataset show an 18.7% reduction in MAE over unimodal baselines, demonstrating superior generalization and robustness. Key contributions are: (1) a video-text collaborative prompting paradigm for enhanced content understanding; and (2) a unified framework for deep fusion and joint optimization of four heterogeneous feature modalities—visual, linguistic, temporal, and social.
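The dual-path fusion described above can be sketched as follows. This is a minimal illustration only, assuming concatenated per-modality embeddings and one independent linear head per engagement metric; the embedding dimensions, head structure, and weights are placeholders, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality embedding sizes (illustrative, not from the paper):
# visual (e.g. ResNet/I3D/ViT), text (BERT), plus handcrafted temporal/social features.
DIMS = {"visual": 512, "text": 768, "temporal": 6, "social": 4}
METRICS = ["views", "likes", "comments", "shares"]

def fuse(features: dict) -> np.ndarray:
    """Concatenate the modality vectors into one fused representation."""
    return np.concatenate([features[k] for k in DIMS])

# One randomly initialised linear head per engagement metric, standing in
# for the multi-head regression network described in the summary.
fused_dim = sum(DIMS.values())
heads = {m: rng.normal(size=fused_dim) for m in METRICS}

def predict(features: dict) -> dict:
    """Predict all four engagement metrics from the fused feature vector."""
    z = fuse(features)
    return {m: float(heads[m] @ z) for m in METRICS}

sample = {k: rng.normal(size=d) for k, d in DIMS.items()}
preds = predict(sample)
```

In the paper, the heads are jointly optimized so that the four correlated engagement metrics share the fused representation; here the heads are untrained and serve only to show the data flow.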

📝 Abstract
This work aims to predict the popularity of short videos from the videos themselves and their related features. Popularity is measured by four key engagement metrics: view count, like count, comment count, and share count. The study employs video classification models with different architectures and training methods as backbone networks to extract video-modality features. Meanwhile, the cleaned video captions are incorporated, together with the video, into a carefully designed prompt framework as input to video-to-text generation models, which produce detailed textual descriptions of the video content. These texts are then encoded into vectors with a pre-trained BERT model. Based on the six sets of vectors above, a neural network is trained for each of the four prediction metrics. In addition, the study performs data mining and feature engineering on the video and tabular data, constructing practical features such as the total frequency of hashtag appearances, the total frequency of mention appearances, video duration, frame count, frame rate, and total time online. Multiple machine learning models are trained, and the most stable one, XGBoost, is selected. Finally, the predictions of the neural network and XGBoost models are averaged to obtain the final result.
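The handcrafted tabular features and the final ensembling step from the abstract can be sketched as below. This is a hedged illustration: the feature names, the hashtag/mention regex, and the sample values are assumptions for demonstration, not the paper's exact feature definitions.

```python
import re

def handcrafted_features(caption: str, duration_s: float,
                         frame_count: int, hours_online: float) -> dict:
    """Tabular features of the kind the abstract describes: hashtag and
    mention frequencies, video duration, frame count, frame rate, and
    total time online (names and extraction rules are illustrative)."""
    return {
        "hashtags": len(re.findall(r"#\w+", caption)),
        "mentions": len(re.findall(r"@\w+", caption)),
        "duration_s": duration_s,
        "frame_count": frame_count,
        "fps": frame_count / duration_s if duration_s else 0.0,
        "hours_online": hours_online,
    }

def ensemble(nn_pred: float, xgb_pred: float) -> float:
    """Final prediction: simple average of the neural-network output
    and the XGBoost output, as described in the abstract."""
    return (nn_pred + xgb_pred) / 2

feats = handcrafted_features("#fyp #dance fun clip by @alice", 15.0, 450, 72.0)
final = ensemble(1200.0, 1000.0)
```

Features of this kind would feed the XGBoost path, while the BERT and video embeddings feed the neural-network path; the two paths meet only in the averaging step.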
Problem

Research questions and friction points this paper is trying to address.

Short Video
Popularity Prediction
Machine Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Video Popularity Prediction
BERT Representation
XGBoost Model
Haixu Liu
The University of Sydney
Deep Learning · Computer Vision · LLM
Wenning Wang
The University of Sydney, Camperdown, New South Wales, 2000, Australia
Haoxiang Zheng
The University of Sydney, Camperdown, New South Wales, 2000, Australia
Penghao Jiang
The University of Sydney, Camperdown, New South Wales, 2000, Australia
Qirui Wang
The University of Sydney, Camperdown, New South Wales, 2000, Australia
Ruiqing Yan
The University of Sydney, Camperdown, New South Wales, 2000, Australia
Qiuzhuang Sun
University of Sydney
Reliability engineering · Industrial statistics · Maintenance