🤖 AI Summary
To address the severe diffraction crosstalk that degrades axial resolution when axial planes are closely spaced in 3D image projection, this paper proposes a single-shot, densely multiplexed depth-encoding method. Our approach jointly optimizes a Fourier-domain encoding network and a multi-layer diffractive optical decoder via end-to-end deep learning, enabling simultaneous high-fidelity projection of 28 independent images at subwavelength axial spacing (~532 nm). Key contributions include: (i) the first differentiable diffractive wavefront decoding architecture, supporting dynamically tunable axial planes; (ii) overcoming conventional diffraction crosstalk limits, achieving voxel-level axial separation at the wavelength scale; and (iii) experimental validation demonstrating <4.2% projection error, 3.1× higher decoding efficiency, and significantly improved image fidelity over state-of-the-art methods.
📝 Abstract
3D image display is essential for next-generation volumetric imaging; however, dense depth multiplexing for 3D image projection remains challenging because diffraction-induced cross-talk rapidly increases as the axial image planes get closer. Here, we introduce a 3D display system comprising a digital encoder and a diffractive optical decoder, which simultaneously projects different images onto multiple target axial planes with high axial resolution. By leveraging multi-layer diffractive wavefront decoding and deep learning-based end-to-end optimization, the system achieves high-fidelity depth-resolved 3D image projection in a snapshot, enabling axial plane separations on the order of a wavelength. The digital encoder leverages a Fourier encoder network to capture multi-scale spatial and frequency-domain features from input images, integrates axial position encoding, and generates a unified phase representation that simultaneously encodes all images to be axially projected in a single snapshot through a jointly-optimized diffractive decoder. We characterized the impact of diffractive decoder depth, output diffraction efficiency, spatial light modulator resolution, and axial encoding density, revealing trade-offs that govern axial separation and 3D image projection quality. We further demonstrated the capability to display volumetric images containing 28 axial slices, as well as the ability to dynamically reconfigure the axial locations of the image planes, performed on demand. Finally, we experimentally validated the presented approach, demonstrating close agreement between the measured results and the target images. These results establish the diffractive 3D display system as a compact and scalable framework for depth-resolved snapshot 3D image projection, with potential applications in holographic displays, AR/VR interfaces, and volumetric optical computing.