You Only Render Once: Enhancing Energy and Computation Efficiency of Mobile Virtual Reality

📅 2025-06-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the computational and energy bottlenecks in mobile VR caused by rendering the two stereo images independently, this paper proposes EffVR, an efficient single-rendering stereo synthesis framework. Methodologically, EffVR pioneers pixel-level geometric guidance, leveraging depth and surface-normal maps, to synthesize stereo views from a single monocular render, departing from the conventional dual-rendering paradigm. It integrates monocular rendering, explicit disparity modeling, a lightweight reprojection network, and co-optimized scheduling for mobile GPU execution. Extensive experiments demonstrate that EffVR achieves, on average, a 27% reduction in power consumption, a 115.2% increase in frame rate, and high perceptual fidelity (0.9679 SSIM and 34.09 PSNR), outperforming state-of-the-art methods. The source code and an Android application are publicly released.
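The core idea — synthesizing the second eye's view from a single render using per-pixel depth — can be sketched as a naive disparity-based forward warp. This is a minimal illustration only, not EffVR's actual reprojection network; the `baseline` and `focal` values, and the function name itself, are hypothetical placeholders:

```python
import numpy as np

def stereo_from_mono(image, depth, baseline=0.064, focal=500.0):
    """Warp one rendered view into left/right eye images using
    per-pixel disparity derived from the depth map.

    disparity (pixels) = focal * baseline / depth
    baseline/focal are illustrative values, not taken from the paper.
    """
    h, w = depth.shape
    disparity = focal * baseline / np.maximum(depth, 1e-6)
    xs = np.arange(w)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        # shift each pixel by half its disparity toward each eye
        shift = (disparity[y] / 2).astype(int)
        lx = np.clip(xs + shift, 0, w - 1)
        rx = np.clip(xs - shift, 0, w - 1)
        left[y, lx] = image[y]
        right[y, rx] = image[y]
    return left, right
```

A plain forward warp like this leaves holes at disoccluded regions, which is precisely why a learned reprojection step (as the summary describes) is needed on top of the geometric shift.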

📝 Abstract
Mobile Virtual Reality (VR) is essential to achieving convenient and immersive human-computer interaction and to realizing emerging applications such as the Metaverse. However, existing VR technologies require two separate renderings of the binocular images, a significant bottleneck for mobile devices with limited computing capability and power supply. This paper proposes EffVR, a rendering-optimization approach for mobile VR. By utilizing per-pixel attributes, EffVR can generate binocular VR images from a monocular image through genuinely one rendering, saving half the computation of conventional approaches. Our evaluation indicates that, compared with the state of the art, EffVR saves 27% power consumption on average while achieving high binocular image quality (0.9679 SSIM and 34.09 PSNR) in mobile VR applications. Additionally, EffVR increases the frame rate by 115.2%. These results corroborate EffVR's superior computation- and energy-saving performance, paving the road to sustainable mobile VR. The source code, demo video, Android app, and more are released anonymously at https://yoro-vr.github.io/
Problem

Research questions and friction points this paper is trying to address.

Reducing the dual-rendering bottleneck in mobile VR
Improving energy efficiency for mobile VR devices
Enhancing computation speed in VR image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates binocular images from monocular rendering
Reduces power consumption by 27% on average
Increases frame rate by 115.2%
🔎 Similar Papers
2023-04-24 · International Symposium on Computer Architecture · Citations: 47
Xingyu Chen, University of California San Diego
Xinmin Fang, University of Colorado Denver
Shuting Zhang, Guangdong University of Technology
Xinyu Zhang, University of California San Diego
Liang He, University of Nebraska–Lincoln
Zhengxiong Li, Assistant Professor, University of Colorado Denver | Anschutz Medical Campus