🤖 AI Summary
This study addresses short-video popularity prediction by proposing a multimodal, dual-path joint modeling framework. Methodologically, it introduces a novel video-text collaborative prompting mechanism to elicit semantic descriptions from Qwen-VL or Video-LLaMA; integrates visual features extracted via ResNet/I3D/ViT, subtitle embeddings encoded by BERT, and handcrafted temporal and social features; and jointly predicts four engagement metrics—views, likes, comments, and shares—using a multi-head neural regression network alongside XGBoost. Experiments on a real-world short-video dataset show an 18.7% reduction in MAE over unimodal baselines, demonstrating superior generalization and robustness. Key contributions are: (1) a video-text collaborative prompting paradigm for enhanced content understanding; and (2) a unified framework for deep fusion and joint optimization of four heterogeneous feature modalities—visual, linguistic, temporal, and social.
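The handcrafted temporal and social features mentioned above (hashtag/mention frequencies, duration, frame count, frame rate, time online) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation; all field and function names are ours.

```python
import re

def handcrafted_features(caption: str, duration_s: float,
                         frame_count: int, hours_online: float) -> dict:
    """Toy version of the handcrafted features described above:
    hashtag/mention frequencies plus simple video/temporal statistics.
    Field names are illustrative, not taken from the paper."""
    return {
        "hashtag_count": len(re.findall(r"#\w+", caption)),   # total hashtag appearances
        "mention_count": len(re.findall(r"@\w+", caption)),   # total mention appearances
        "duration_s": duration_s,                             # video duration in seconds
        "frame_count": frame_count,
        "frame_rate": frame_count / duration_s if duration_s else 0.0,
        "hours_online": hours_online,                         # total time online
    }

feats = handcrafted_features("Check this #fun #cat clip @alice", 12.0, 360, 48.0)
print(feats["hashtag_count"], feats["frame_rate"])  # 2 30.0
```

In the paper these features feed tree-based models (XGBoost) alongside the learned multimodal embeddings.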
📝 Abstract
This work predicts the popularity of short videos from the videos themselves and their related features. Popularity is measured by four key engagement metrics: view count, like count, comment count, and share count. The study employs video classification models with different architectures and training methods as backbone networks to extract video-modality features. In parallel, the cleaned video captions are incorporated, together with the video, into a carefully designed prompt framework that serves as input to video-to-text generation models, which produce detailed textual descriptions of the video content. These descriptions are then encoded into vectors using a pre-trained BERT model. From the resulting six sets of vectors, a neural network is trained for each of the four prediction metrics. In addition, the study performs data mining and feature engineering on the video and tabular data, constructing practical features such as the total frequency of hashtag appearances, the total frequency of mention appearances, video duration, frame count, frame rate, and total time online. Multiple machine learning models are trained on these features, and the most stable one, XGBoost, is selected. Finally, the predictions of the neural network and the XGBoost model are averaged to obtain the final result.
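The final fusion step described above (averaging the neural network's and XGBoost's per-metric predictions) can be sketched as below. This is a hedged sketch with made-up prediction values; the real models would supply `nn_preds` and `xgb_preds`.

```python
import numpy as np

# The four engagement metrics predicted in the paper.
METRICS = ["views", "likes", "comments", "shares"]

def fuse_predictions(nn_preds: dict, xgb_preds: dict) -> dict:
    """Average the neural-network and XGBoost predictions for each metric,
    as in the paper's final ensembling step."""
    return {m: (np.asarray(nn_preds[m]) + np.asarray(xgb_preds[m])) / 2.0
            for m in METRICS}

# Hypothetical predictions for three videos (placeholder numbers).
nn_preds  = {m: np.array([100.0, 200.0, 300.0]) for m in METRICS}
xgb_preds = {m: np.array([120.0, 180.0, 340.0]) for m in METRICS}

fused = fuse_predictions(nn_preds, xgb_preds)
print(fused["views"])  # element-wise mean of the two models' view-count predictions
```

A simple unweighted mean like this is a common ensembling baseline; a weighted average tuned on a validation split would be a natural extension.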