🤖 AI Summary
Traditional ray tracing requires iterative surface-by-surface intersection computations, making it computationally expensive. This paper proposes Ray2Ray, the first method to integrate implicit neural representations (INRs) into end-to-end optical ray tracing: it learns a physics-constrained mapping from incident to emergent rays, replacing multi-step geometric intersection with a single forward pass. The approach combines geometric-optics priors with differentiable rendering and is trained under supervision from real-world commercial optical system data. Evaluated on nine diverse physical optical systems, Ray2Ray achieves emergent-ray positioning errors ≤1 μm and angular deviations ≈0.01° (micrometer-level spatial accuracy and sub-arcminute angular precision) while significantly accelerating inference. To the authors' knowledge, this is the first high-fidelity, efficient, and generalizable neural optical surrogate model.
📝 Abstract
Ray tracing is a widely used technique for modeling optical systems, but its sequential surface-by-surface computations can be computationally intensive. We propose Ray2Ray, a novel method that leverages implicit neural representations to model optical systems more efficiently, replacing surface-by-surface computation with a single-pass end-to-end model. Ray2Ray learns the mapping between rays emitted from a given source and the corresponding rays after passing through a given optical system in a physically accurate manner. We train Ray2Ray on nine off-the-shelf optical systems, achieving positional errors on the order of 1 μm and angular deviations on the order of 0.01 degrees in the estimated output rays. Our work highlights the potential of neural representations as a proxy for optical ray tracers.
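The core idea, a single network forward pass mapping an incident ray to its emergent ray, can be sketched as follows. This is a hypothetical minimal illustration with a randomly initialized MLP and an assumed 6-D ray parameterization (origin plus unit direction); the paper's actual architecture, ray encoding, and physics constraints are not specified here and the names `init_mlp` and `ray2ray` are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights for a small fully connected network (illustrative only,
    not a trained Ray2Ray model)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def ray2ray(params, origin, direction):
    """One forward pass: 6-D incident ray -> 6-D emergent ray, replacing
    iterative surface-by-surface intersection."""
    x = np.concatenate([origin, direction / np.linalg.norm(direction)])
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)  # hidden-layer nonlinearity
    o_out, d_out = x[:3], x[3:]
    # Re-normalize so the emergent direction is a valid unit vector.
    return o_out, d_out / np.linalg.norm(d_out)

params = init_mlp([6, 64, 64, 6])  # assumed layer sizes
o_out, d_out = ray2ray(params,
                       np.array([0.0, 0.0, 0.0]),   # ray origin
                       np.array([0.0, 0.1, 1.0]))   # ray direction
```

In a real system the weights would be trained with supervision from a conventional ray tracer on commercial lens prescriptions, so that one such forward pass stands in for the full sequence of surface intersections.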