Morphe: High-Fidelity Generative Video Streaming with Vision Foundation Model

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of delivering high-quality, real-time video streaming under bandwidth-constrained or high-packet-loss network conditions, where conventional video streaming struggles to balance visual fidelity and latency, and existing generative approaches fall short in either delay or reconstruction accuracy. We propose the first end-to-end generative video streaming architecture grounded in a Vision Foundation Model (VFM), which jointly optimizes a visual tokenizer, variable-resolution spatiotemporal encoding, intelligent packet loss handling, and generative compression. This integrated design enables robust, low-bandwidth transmission with high perceptual quality. Experimental results demonstrate that our method reduces bandwidth consumption by 62.5% compared to H.265 while maintaining comparable visual quality, and consistently delivers high-fidelity video streams even under severe network degradation.

📝 Abstract
Video streaming is a fundamental Internet service, yet its quality still cannot be guaranteed, especially under poor network conditions such as bandwidth-constrained or remote areas. Existing work proceeds in two directions: traditional pixel-codec streaming is approaching its compression limit and is hard to push further, while emerging neural-enhanced or generative streaming usually falls short in latency and visual fidelity, hindering practical deployment. Inspired by the recent success of vision foundation models (VFMs), we harness the powerful video understanding and processing capabilities of VFMs to achieve generalization, high fidelity, and loss resilience for real-time video streaming at an even higher compression rate. We present the first paradigm that enables VFM-based end-to-end generative video streaming towards this goal. Specifically, Morphe employs joint training of visual tokenizers and variable-resolution spatiotemporal optimization under simulated network constraints. Additionally, a robust streaming system is constructed that leverages intelligent packet dropping to resist real-world network perturbations. Extensive evaluation demonstrates that Morphe achieves comparable visual quality while saving 62.5% bandwidth compared to H.265, and accomplishes real-time, loss-resilient video delivery in challenging network environments, representing a milestone in VFM-enabled multimedia streaming solutions.
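To make the token-streaming idea concrete, here is a toy sketch of the general pattern the abstract describes: frames are turned into token packets, some packets are lost in transit, and the decoder conceals losses using the previous frame's tokens. This is an illustrative assumption, not Morphe's actual tokenizer or protocol; all function names and parameters are hypothetical.

```python
import random

def tokenize(frame, grid=4):
    # Toy "visual tokenizer": split a flat frame into grid*grid token
    # packets. (Illustrative stand-in for a learned VFM tokenizer.)
    size = len(frame) // (grid * grid)
    return [frame[i * size:(i + 1) * size] for i in range(grid * grid)]

def transmit(tokens, loss_rate, rng):
    # Simulate a lossy channel: each token packet survives independently.
    return [t if rng.random() > loss_rate else None for t in tokens]

def reconstruct(received, previous_tokens):
    # Loss concealment: a lost token packet is filled from the previous
    # frame's co-located tokens (a crude form of temporal prediction).
    return [t if t is not None else p
            for t, p in zip(received, previous_tokens)]

rng = random.Random(0)
prev = tokenize([0.0] * 64)    # previous decoded frame's tokens
frame = tokenize([1.0] * 64)   # current frame's tokens
recv = transmit(frame, loss_rate=0.3, rng=rng)
out = reconstruct(recv, prev)
```

The point of the sketch is the decoupling it illustrates: because reconstruction works at the token level, the receiver can always produce a full frame even when packets drop, trading exact fidelity in the lost regions for uninterrupted playback.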
Problem

Research questions and friction points this paper is trying to address.

video streaming
visual fidelity
bandwidth-constrained
loss resilience
compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision Foundation Model
Generative Video Streaming
Visual Tokenizer
Loss-Resilient Streaming
Spatiotemporal Optimization