AI Summary
Existing hologram generation methods require separate model training for each set of display parameters (e.g., propagation distance, volumetric depth, wavelength), resulting in poor generalizability and high deployment overhead. To address this, we propose the first real-time holographic synthesis framework supporting multidimensional, configurable display- and scene-related parameters. Our approach uncovers an intrinsic learning-level connection between depth estimation and hologram synthesis; introduces a continuously tunable conditional modeling mechanism enabling single-model joint adaptation to propagation distance, volumetric depth, brightness, pixel pitch, and wavelength; and integrates conditional neural radiance fields (cNeRF) with differentiable optical propagation modeling, embedding geometric priors and physical constraints for end-to-end optimization. Evaluated on simulations and two prototype systems, our method achieves high-fidelity 3D reconstruction, inference speed twice that of state-of-the-art methods, and significantly improved cross-parameter generalization.
Abstract
Emerging learned holography approaches have enabled faster, higher-quality hologram synthesis, setting a new milestone toward practical holographic displays. However, these approaches require training a dedicated model for each set of display-scene parameters. To address this shortcoming, our work introduces a highly configurable learned model structure that synthesizes 3D holograms interactively while supporting diverse display-scene parameters. Models built on this structure can be conditioned continuously on varying novel scene parameters, including input images, propagation distances, volume depths, and peak brightnesses, as well as novel display parameters such as pixel pitches and wavelengths. Uniquely, our findings unearth a correlation between the depth estimation and hologram synthesis tasks in the learning domain, leading to a learned model that unlocks accurate 3D hologram generation from 2D images across varied display-scene parameters. We validate our models by synthesizing high-quality 3D holograms in simulations and verify our findings on two different holographic display prototypes. Moreover, our family of models synthesizes holograms with a 2x speed-up over state-of-the-art learned holography approaches.
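To make the display-parameter conditioning concrete, the sketch below shows the standard angular spectrum method (ASM) for free-space propagation, the textbook wave model that differentiable holographic pipelines of this kind typically build on. The propagation distance, pixel pitch, and wavelength appear as explicit, continuously variable arguments, which is exactly the set of display-scene parameters the model is conditioned on. This is a generic illustration of ASM propagation, not code from the paper; the function name and its NumPy formulation are our own assumptions.

```python
import numpy as np

def asm_propagate(field, wavelength, pixel_pitch, distance):
    """Propagate a complex wavefield by `distance` with the angular
    spectrum method (standard textbook formulation, shown only to
    illustrate the kind of differentiable optical propagation model
    the paper's pipeline relies on).

    field       : 2D complex array (the hologram plane field)
    wavelength  : illumination wavelength in meters
    pixel_pitch : SLM pixel pitch in meters
    distance    : propagation distance in meters
    """
    ny, nx = field.shape
    # Spatial frequency grids (cycles per meter), set by the pixel pitch.
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function H = exp(i * 2*pi * d * sqrt(1/lambda^2 - fx^2 - fy^2)),
    # with evanescent components (negative argument) suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi * distance * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    # Filter the field's angular spectrum and return to the spatial domain.
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because every step is a Fourier transform or pointwise multiplication, the same computation written in an autodiff framework is differentiable end to end, which is what allows physical constraints like this to be embedded directly into the training loss.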