Tile Classification Based Viewport Prediction with Multi-modal Fusion Transformer

📅 2023-09-26
🏛️ ACM Multimedia
📈 Citations: 6
Influential: 0
🤖 AI Summary
To address the limited robustness of viewport prediction in 360° video streaming and the error accumulation caused by oversimplified multimodal fusion, this paper replaces the conventional trajectory regression paradigm with a novel tile-level binary classification framework. Its key contributions are: (1) a tile-wise interest binary classification mechanism that replaces continuous coordinate regression; (2) a multimodal fusion Transformer that jointly models intra-modal long-range dependencies in user historical behavior and video content features, as well as cross-modal interactions; and (3) a region aggregation strategy that enhances spatial consistency. Evaluated on the PVS-HM and Xu-Gaze datasets, the method achieves state-of-the-art average prediction accuracy and IoU with controllable computational overhead. Its discrete tile-level predictions deliver high precision, strong robustness against noisy inputs and distribution shifts, and improved interpretability.
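The tile-wise classification and region aggregation ideas can be illustrated with a small sketch: given per-tile binary interest predictions over the tile grid, slide a viewport-sized window and pick the region covering the most interested tiles. All names and sizes here are illustrative assumptions, not the paper's code, and longitude wraparound of the 360° sphere is ignored for brevity.

```python
# Hypothetical sketch of region aggregation over tile-level binary
# interest predictions (assumed grid/viewport sizes; not MFTR's code).
import numpy as np

def select_viewport(interest, view_h=2, view_w=3):
    """Slide a view_h x view_w window over the tile grid and return the
    top-left tile index of the region containing the most interested tiles."""
    H, W = interest.shape
    best, best_pos = -1, (0, 0)
    for r in range(H - view_h + 1):
        for c in range(W - view_w + 1):
            count = interest[r:r + view_h, c:c + view_w].sum()
            if count > best:
                best, best_pos = count, (r, c)
    return best_pos

# 4x6 tile grid; 1 = tile classified as user-interested
grid = np.zeros((4, 6), dtype=int)
grid[1:3, 2:5] = 1  # cluster of interested tiles
print(select_viewport(grid))  # -> (1, 2)
```

Because the output is a discrete region rather than a regressed coordinate, a few mislabeled tiles shift the chosen window by at most one tile, which is one intuition behind the robustness claim.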
📝 Abstract
Viewport prediction is a crucial component of tile-based 360° video streaming systems. However, existing trajectory-based methods lack robustness and oversimplify the construction and fusion of information across different input modalities, leading to error accumulation. In this paper, we propose a tile classification based viewport prediction method with a Multi-modal Fusion Transformer, namely MFTR. Specifically, MFTR utilizes transformer-based networks to extract long-range dependencies within each modality, then mines intra- and inter-modality relations to capture the combined impact of user historical inputs and video content on future viewport selection. In addition, MFTR categorizes future tiles into two classes, user-interested or not, and selects the future viewport as the region containing the most user-interested tiles. Compared with predicting head trajectories, choosing the future viewport from tile-level binary classification results exhibits better robustness and interpretability. To evaluate the proposed MFTR, we conduct extensive experiments on the widely used PVS-HM and Xu-Gaze datasets. MFTR shows superior performance over state-of-the-art methods in terms of average prediction accuracy and overlap ratio, and also presents competitive computational efficiency.
Problem

Research questions and friction points this paper is trying to address.

Improving robustness in 360° video viewport prediction
Enhancing multi-modal input fusion for viewport selection
Reducing error accumulation in trajectory-based prediction methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal Fusion Transformer for viewport prediction
Tile classification based on user interest
Long-range dependencies extraction with transformers
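The "long-range dependencies" bullet refers to the standard transformer mechanism: scaled dot-product self-attention lets every time step of a user's history (or every content feature) attend to every other step, regardless of distance. A minimal NumPy sketch of that mechanism, with assumed dimensions and randomly initialized weights (not the paper's implementation):

```python
# Minimal scaled dot-product self-attention sketch (illustrative only;
# dimensions, weights, and single-head form are assumptions).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # all-pairs similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # mix of ALL time steps

rng = np.random.default_rng(0)
T, d = 8, 4                         # e.g. 8 history steps, 4-dim features
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (8, 4)
```

In a multimodal fusion setting, the same attention pattern is typically reused across modalities (queries from one modality, keys/values from another) to capture inter-modality relations.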
Zhihao Zhang
School of Computer Science and Technology, Xi’an Jiaotong University & Key Laboratory of Intelligent Networks and Network Security, Ministry of Education, Xi’an Jiaotong University, Xi’an, China
Yiwei Chen
Yunnan University, Zhejiang University
Signal processing, Deep learning, Computational imaging, Quantum machine learning
Weizhan Zhang
Professor, Department of Computer Science and Technology, Xi'an Jiaotong University
Multimedia networking
Caixia Yan
School of Computer Science and Technology, Xi’an Jiaotong University & National Engineering Lab for Big Data Analytics, Xi’an Jiaotong University, Xi’an, China
Qinghua Zheng
Xi'an Jiaotong University
AI
Qi Wang
MIGU Video, Shanghai, China
Wangdu Chen
MIGU Video, Shanghai, China