Hybrelighter: Combining Deep Anisotropic Diffusion and Scene Reconstruction for On-device Real-time Relighting in Mixed Reality

📅 2025-08-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Addressing the challenges of real-time relighting in mixed reality (MR)—namely high latency in existing deep learning approaches, low geometric fidelity in scan-dependent scene understanding, and the inability of 2D filtering to model geometry-aware shadows—this paper proposes a lightweight, edge-deployable real-time relighting method. Our approach innovatively integrates semantic image segmentation, depth-guided anisotropic diffusion, and monocular lightweight scene reconstruction, enabling geometry-aware light transport and shadow generation without requiring high-precision 3D scans. The end-to-end pipeline achieves 100 FPS on edge devices, significantly outperforming state-of-the-art methods. Evaluated in real-world MR applications such as real estate visualization, it demonstrates high visual fidelity and robust interactivity. Our core contribution is the first integration of depth-conditioned anisotropic diffusion with on-device monocular scene reconstruction for real-time relighting, achieving an unprecedented balance between accuracy and efficiency.

Technology Category

Application Category

📝 Abstract
Mixed Reality scene relighting, where virtual changes to lighting conditions realistically interact with physical objects, producing authentic illumination and shadows, can be used in a variety of applications. One such application in real estate could be visualizing a room at different times of day and placing virtual light fixtures. Existing deep learning-based relighting techniques typically exceed the real-time performance capabilities of current MR devices. On the other hand, scene understanding methods, such as on-device scene reconstruction, often yield inaccurate results due to scanning limitations, in turn affecting relighting quality. Finally, simpler 2D image filter-based approaches cannot represent complex geometry and shadows. We introduce a novel method to integrate image segmentation, with lighting propagation via anisotropic diffusion on top of basic scene understanding, and the computational simplicity of filter-based techniques. Our approach corrects on-device scanning inaccuracies, delivering visually appealing and accurate relighting effects in real-time on edge devices, achieving speeds as high as 100 fps. We show a direct comparison between our method and the industry standard, and present a practical demonstration of our method in the aforementioned real estate example.
Problem

Research questions and friction points this paper is trying to address.

Real-time mixed reality relighting on edge devices
Correcting inaccurate on-device scene reconstruction scans
Overcoming limitations of 2D filter-based shadow generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining anisotropic diffusion with scene reconstruction
Correcting scanning inaccuracies for accurate relighting
Achieving real-time performance at 100 fps
🔎 Similar Papers
No similar papers found.
H
Hanwen Zhao
University of Washington, Seattle, Washington, United States
J
John Akers
UW Reality Lab, University of Washington, Seattle, Washington, United States
B
Baback Elmieh
Computer Science, University of Washington, Seattle, Washington, United States
Ira Kemelmacher-Shlizerman
Ira Kemelmacher-Shlizerman
Professor of Computer Science at University of Washington, Principal Scientist at Google
Computer VisionComputer GraphicsGenerative AI