🤖 AI Summary
To address semantic dilution—causing visual blurring—induced by multi-head self-attention (MHSA) in Transformer-based video frame prediction, and the misalignment between embedding-space prediction targets and pixel-level reconstruction losses, this paper proposes Semantic-Condensed Multi-Head Self-Attention (SCMHSA). SCMHSA explicitly aligns prediction objectives with reconstruction losses in the latent space. We further design a latent-space consistency loss and adopt embedding-level optimization. This work is the first to achieve representation-loss co-modeling in the latent space, effectively mitigating representation distortion. Evaluated on multiple standard video prediction benchmarks, our method significantly outperforms baseline Transformers: PSNR improves by 2.1–3.8 dB, SSIM by +0.042, and temporal coherence (measured by LPIPS-T) improves by 17.3%. These results demonstrate superior accuracy and dynamic modeling capability.
📝 Abstract
Next-frame prediction in videos is crucial for applications such as autonomous driving, object tracking, and motion prediction. The primary challenge in next-frame prediction lies in effectively capturing and processing both spatial and temporal information from previous video sequences. The transformer architecture, known for its prowess in handling sequence data, has made remarkable progress in this domain. However, transformer-based next-frame prediction models face notable issues: (a) The multi-head self-attention (MHSA) mechanism requires the input embedding to be split into $N$ chunks, where $N$ is the number of heads. Each segment captures only a fraction of the original embeddings information, which distorts the representation of the embedding in the latent space, resulting in a semantic dilution problem; (b) These models predict the embeddings of the next frames rather than the frames themselves, but the loss function based on the errors of the reconstructed frames, not the predicted embeddings -- this creates a discrepancy between the training objective and the model output. We propose a Semantic Concentration Multi-Head Self-Attention (SCMHSA) architecture, which effectively mitigates semantic dilution in transformer-based next-frame prediction. Additionally, we introduce a loss function that optimizes SCMHSA in the latent space, aligning the training objective more closely with the model output. Our method demonstrates superior performance compared to the original transformer-based predictors.