🤖 AI Summary
To address a core difficulty in high-dimensional light transport sampling, where conventional MCMC methods struggle to balance efficient local exploration against global convergence, this paper introduces the first unified continuous-time MCMC framework for rendering. Methodologically, it (1) pioneers the use of continuous-time MCMC in light transport simulation, enabling arbitrary light transport algorithms to be embedded into the framework; (2) proposes a tunable Markov chain recalibration mechanism, Jump-Restore, that rigorously preserves invariance with respect to the target distribution; and (3) incorporates parallelizable embedded transformations for scalable deployment. Experiments demonstrate substantial reductions in estimator variance and relative error, shorter running times, and improved parallel scalability. Crucially, the framework is compatible with all existing MCMC-based light transport algorithms, permitting seamless, lossless integration and upgrades.
📝 Abstract
Markov chain Monte Carlo (MCMC) algorithms come to the rescue when sampling from a complex, high-dimensional distribution with conventional methods is intractable. Yet even though MCMC is a powerful tool, it is hard to control and tune in practice: simultaneously achieving local exploration of the state space and global discovery of the target distribution is a challenging task. In this work, we present an MCMC formulation that subsumes all existing MCMC samplers employed in rendering. We then present a novel framework for adjusting an arbitrary Markov chain so that it exhibits invariance with respect to a specified target distribution. To showcase the potential of the proposed framework, we focus on a first simple application in light transport simulation. As a by-product, we introduce continuous-time MCMC sampling to the computer graphics community. We show how any existing MCMC-based light transport algorithm can be embedded into our framework, and we prove, both theoretically and empirically, that this embedding is superior to running the standalone algorithm. In fact, our approach converts any existing algorithm into a highly parallelizable variant with shorter running time, smaller error, and lower variance.
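For readers unfamiliar with the invariance property the abstract refers to, the following is a minimal background sketch of a classical discrete-time random-walk Metropolis–Hastings sampler, not the paper's continuous-time framework or its Jump-Restore mechanism; the target density and all names here are illustrative assumptions:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step_size=1.0, seed=0):
    """Random-walk Metropolis-Hastings: the acceptance rule makes the
    chain leave the target distribution invariant."""
    rng = random.Random(seed)
    x = x0
    log_p = log_target(x)
    samples = []
    for _ in range(n_steps):
        # Local exploration: propose a small Gaussian perturbation.
        x_new = x + rng.gauss(0.0, step_size)
        log_p_new = log_target(x_new)
        # Accept with probability min(1, p(x_new)/p(x)); rejection keeps
        # the current state, which is what preserves invariance.
        if math.log(rng.random() + 1e-300) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        samples.append(x)
    return samples

# Toy target: unnormalized standard normal, log p(x) = -x^2 / 2.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=3.0, n_steps=50000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The tension the abstract describes is visible even in this toy: a small `step_size` explores locally but converges slowly to the global target, while a large one is rejected too often, which is the trade-off the proposed framework aims to sidestep.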