🤖 AI Summary
To address insufficient robustness in viewport prediction for 360° video streaming and error accumulation caused by oversimplified multimodal fusion, this paper abandons the conventional trajectory regression paradigm and proposes a novel tile-level binary classification driven viewport prediction framework. The key contributions are: (1) a tile-wise interest binary classification mechanism that replaces continuous coordinate regression; (2) a multimodal fusion Transformer that jointly models intra-modal long-range dependencies in user historical behavior and video content features, as well as cross-modal interactions; and (3) a region aggregation strategy that enhances spatial consistency. Evaluated on the PVS-HM and Xu-Gaze datasets, the method achieves state-of-the-art average prediction accuracy and IoU with controllable computational overhead. It delivers high precision, strong robustness against noisy inputs and distribution shifts, and improved interpretability through discrete tile-level predictions.
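The region aggregation step described above can be sketched as a simple search: given the binary interest labels over the tile grid, pick the viewport-sized region that covers the most user-interested tiles. This is only an illustrative reading of the strategy, not the paper's implementation; the grid dimensions, viewport size, and the `select_viewport` name are assumptions, as is the wrap-around handling in longitude for the 360° panorama.

```python
import numpy as np

def select_viewport(interest: np.ndarray, vp_h: int, vp_w: int):
    """Return the top-left tile index of the vp_h x vp_w region that
    contains the most user-interested tiles, plus its score.

    interest: binary (H, W) grid of per-tile classification results.
    Columns wrap around (equirectangular longitude); rows do not.
    """
    H, W = interest.shape
    best_score, best_pos = -1, (0, 0)
    for r in range(H - vp_h + 1):              # no vertical wrap
        for c in range(W):                     # horizontal wrap-around
            cols = [(c + dc) % W for dc in range(vp_w)]
            score = int(interest[r:r + vp_h, :][:, cols].sum())
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

A coarser grid (e.g. 6×12 tiles) keeps this exhaustive scan cheap; a real system could replace it with an integral-image lookup if the tile grid were fine-grained.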
📝 Abstract
Viewport prediction is a crucial component of tile-based 360° video streaming systems. However, existing trajectory-based methods lack robustness and oversimplify the construction and fusion of information across different input modalities, leading to error accumulation. In this paper, we propose a tile-classification-based viewport prediction method with a Multi-modal Fusion Transformer, namely MFTR. Specifically, MFTR utilizes transformer-based networks to extract the long-range dependencies within each modality, then mines intra- and inter-modality relations to capture the combined impact of user historical inputs and video content on future viewport selection. In addition, MFTR categorizes future tiles into two classes, user-interested or not, and selects the future viewport as the region containing the most user-interested tiles. Compared with predicting head trajectories, choosing the future viewport from tile-level binary classification results exhibits better robustness and interpretability. To evaluate the proposed MFTR, we conduct extensive experiments on the two widely used PVS-HM and Xu-Gaze datasets. MFTR outperforms state-of-the-art methods in terms of average prediction accuracy and overlap ratio, while offering competitive computational efficiency.
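The two evaluation quantities named in the abstract, average prediction accuracy and overlap ratio (IoU), have natural tile-level definitions once predictions are binary grids. The sketch below shows one plausible reading under that assumption; the exact formulas used in the paper may differ, and the function names are illustrative.

```python
import numpy as np

def tile_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of tiles whose predicted interest label matches the
    ground-truth viewport occupancy."""
    return float((pred == gt).mean())

def tile_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Overlap ratio between predicted and ground-truth tile sets:
    |intersection| / |union| over interested tiles."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0
```

Note that accuracy alone can look high when most tiles are uninterested (class imbalance), which is presumably why the overlap ratio is reported alongside it.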