HybridPrompt: Bridging Generative Priors and Traditional Codecs for Mobile Streaming

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the trade-off between compression efficiency and real-time decoding performance in video codecs. Traditional codecs suffer severe quality degradation at low bitrates, while purely generative neural codecs struggle to achieve real-time decoding on mobile devices. To bridge this gap, the authors propose a hybrid architecture that encodes keyframes using a generative model while processing other frames with a conventional codec, enabling end-to-end differentiable joint optimization. The method achieves real-time 1080p video decoding (>150 FPS) on commercial smartphones—the first such demonstration to date—and leverages inter-frame supervision to refine keyframe generation. At 200 kbps, it improves LPIPS scores by 8% on average over traditional approaches, while offering several orders of magnitude faster decoding than pure neural methods.

📝 Abstract
In Video on Demand (VoD) scenarios, traditional codecs are the industry standard due to their high decoding efficiency. However, they suffer severe quality degradation under low-bandwidth conditions. While emerging generative neural codecs offer significantly higher perceptual quality, their reliance on heavy frame-by-frame generation makes real-time playback on mobile devices impractical. We ask: is it possible to combine the speed of traditional standards with the superior visual fidelity of neural approaches? We present HybridPrompt, the first generative video system capable of real-time 1080p decoding at over 150 FPS on a commercial smartphone. Specifically, we employ a hybrid architecture that encodes keyframes with a generative model while relying on traditional codecs for the remaining frames. A major challenge is that the two paradigms have conflicting objectives: the "hallucinated" details from generative models often misalign with the rigid prediction mechanisms of traditional codecs, causing bitrate inefficiency. To address this, we show that the traditional decoding process is differentiable, enabling an end-to-end optimization loop. This allows us to use subsequent frames as additional supervision, forcing the generative model to synthesize keyframes that are not only perceptually high-fidelity but also mathematically optimal references for the traditional codec. By integrating a two-stage generation strategy, our system outperforms pure neural baselines by orders of magnitude in speed while achieving an average LPIPS gain of 8% over traditional codecs at 200 kbps.
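The abstract describes two ideas that can be sketched in a few lines: (1) per-frame codec assignment, where one keyframe per group of pictures goes to the generative model and the rest to the traditional codec, and (2) a joint objective in which the keyframe is scored both on its own perceptual quality and on how well it serves as a reference for the following frames. The sketch below is a minimal illustration of that structure only; the names (`gop_size`, `lambda_ref`) and the scalar stand-ins for the loss terms are assumptions, not the authors' implementation.

```python
def assign_codec(num_frames, gop_size=32):
    """Hypothetical scheduler: one generative keyframe per GOP,
    traditional codec for every other frame."""
    return ["generative" if i % gop_size == 0 else "traditional"
            for i in range(num_frames)]

def joint_loss(keyframe_perceptual, interframe_residuals, lambda_ref=0.5):
    """Illustrative joint objective: the keyframe must look good
    (perceptual term) AND be a cheap reference for subsequent frames
    (mean residual term). In the paper this second term is what the
    differentiable traditional decoder makes trainable end to end;
    here both terms are plain scalars for clarity."""
    ref_term = sum(interframe_residuals) / len(interframe_residuals)
    return keyframe_perceptual + lambda_ref * ref_term
```

In an actual training loop the residual term would be computed by back-propagating through the traditional decoder, so gradients from later frames reach the generative keyframe model; the scheduler simply fixes which frames each gradient path touches.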
Problem

Research questions and friction points this paper is trying to address.

video compression
generative models
mobile streaming
neural codecs
perceptual quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

HybridPrompt
generative neural codecs
differentiable decoding
end-to-end optimization
mobile video streaming
Liming Liu, Peking University
Jiangkai Wu, Peking University
Haoyang Wang, Peking University
Peiheng Wang, Peking University
Zongming Guo, Peking University
Xinggong Zhang, Peking University
AI-driven Multimedia Networking
Video Communication
Transport Protocol