🤖 AI Summary
This study addresses a critical challenge in time-delay inference: the likelihood function generically develops a boundary-driven "W"-shaped structure, which traps standard Bayesian methods in spurious edge modes and thereby biases estimates of the Hubble constant. The work is the first to systematically map the global pathological structure of the likelihood and its extrapolative origin, clarifying both why global samplers fail and how local optimizers introduce bias. Modeling light curves with Gaussian processes and combining nested sampling, local MCMC, and optimization algorithms, supported by extensive simulations and convergence analyses, the authors propose practical remedies, most notably increasing the number of live points. These measures substantially improve the robustness and reliability of fully Bayesian time-delay inference.
📝 Abstract
We identify a fundamental pathology in the likelihood for time delay inference that challenges standard inference methods. By analysing this likelihood with Gaussian process light curve models, we show that it generically develops a boundary-driven "W"-shape, with a global maximum at the true delay and gradual rises towards the edges of the observation window. This arises because time delay estimation is intrinsically extrapolative. In practice, global samplers such as nested sampling are steered towards spurious edge modes unless strict convergence criteria are adopted. We demonstrate this with simulations and show that the effect strengthens with higher data density over a fixed time span. To ensure convergence, we provide concrete guidance, notably increasing the number of live points. Further, we show that methods implicitly favouring small delays, for example optimisers and local MCMC, induce a bias towards larger $H_0$. Our results clarify these failure modes and offer practical remedies for robust fully Bayesian time delay inference.
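To make the setup concrete, here is a minimal, self-contained sketch of the kind of likelihood scan the abstract describes: two images of the same source are modelled with a shared Gaussian process, image-B timestamps are shifted by a trial delay, and the joint GP log-likelihood is evaluated on a grid of delays. This is an illustration under assumed choices (squared-exponential kernel, 40 epochs per image, noise level 0.05), not the paper's actual pipeline.

```python
import numpy as np

# Illustrative sketch only (not the paper's pipeline): evaluate a joint
# Gaussian-process log-likelihood over trial time delays for two images of
# the same source, where image B is a delayed copy of image A. All kernel
# choices and parameter values here are assumptions for the demo.

rng = np.random.default_rng(0)

def rbf_kernel(t1, t2, amp=1.0, scale=20.0):
    """Squared-exponential covariance between two sets of times."""
    d = t1[:, None] - t2[None, :]
    return amp**2 * np.exp(-0.5 * (d / scale) ** 2)

# Simulate one latent GP light curve observed at image-A times and at
# image-B times shifted back by the true delay.
delay_true = 30.0
tA = np.sort(rng.uniform(0.0, 200.0, 40))
tB = np.sort(rng.uniform(0.0, 200.0, 40))
t_source = np.concatenate([tA, tB - delay_true])  # intrinsic source-frame times
K = rbf_kernel(t_source, t_source) + 1e-9 * np.eye(t_source.size)
sigma = 0.05  # assumed photometric noise level
y = rng.multivariate_normal(np.zeros(t_source.size), K)
y = y + sigma * rng.normal(size=y.size)

def log_likelihood(trial_delay):
    """Joint GP log-likelihood with image-B times shifted by trial_delay."""
    t = np.concatenate([tA, tB - trial_delay])
    C = rbf_kernel(t, t) + sigma**2 * np.eye(t.size)
    _, logdet = np.linalg.slogdet(C)
    alpha = np.linalg.solve(C, y)
    return -0.5 * (y @ alpha + logdet + t.size * np.log(2.0 * np.pi))

# Scan trial delays: the global maximum should sit near the true delay,
# while the boundary-driven rises the paper describes emerge as the trial
# delay approaches the edges of the observation window.
trial_delays = np.linspace(0.0, 100.0, 101)
ll = np.array([log_likelihood(d) for d in trial_delays])
best_delay = trial_delays[np.argmax(ll)]
```

Because the data are generated from the same GP used in the likelihood, the grid maximum lands near the injected 30-day delay; in the multimodal regime the paper analyses, a local optimiser started near the window edge would instead climb one of the spurious boundary rises.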