🤖 AI Summary
Traditional MCMC methods for posterior inference from pulsar light curves are computationally prohibitive. This work proposes an efficient alternative that first constructs a latent embedding space of simulated light curves using a pretrained masked U-Net, enabling rapid initial posterior estimates via nearest-neighbor retrieval. A forward physical simulator then guides local hill-climbing optimization to shift these estimates toward higher-likelihood parameter regions. By integrating learned latent representations with simulator-guided local optimization, the method achieves a 120-fold speedup, reducing inference time on real data from PSR J0030+0451 from 24 hours to 12 minutes while preserving posterior accuracy consistent with full MCMC sampling.
📝 Abstract
Posterior inference from pulsar observations in the form of light curves is commonly performed using Markov chain Monte Carlo (MCMC) methods, which are accurate but computationally expensive. We introduce a framework that accelerates posterior inference while maintaining accuracy by combining learned latent representations with local simulator-guided optimization. A masked U-Net is first pretrained to reconstruct complete light curves from partial observations and thereby produce informative latent embeddings. Given a query light curve, we retrieve similar simulated light curves from a simulation bank by measuring similarity in the embedding space of the pretrained U-Net encoder, yielding an initial empirical approximation to the posterior over parameters. This initialization is then refined by a hill-climbing local optimization procedure, guided by a forward simulator, which progressively shifts the empirical posterior toward higher-likelihood parameter regions. Experiments on the observed light curve of PSR J0030+0451, captured by NASA's Neutron Star Interior Composition Explorer (NICER), show that our method closely matches posterior estimates obtained with traditional MCMC while achieving a 120-fold reduction in inference time (from 24 hours to 12 minutes), demonstrating the effectiveness of learned representations and simulator-guided optimization for accelerated posterior inference.
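The pipeline described above (embed the query, retrieve nearest simulated curves as an initial empirical posterior, then refine each retrieved parameter vector by simulator-guided hill climbing) can be sketched in miniature. Everything here is a stand-in, not the paper's implementation: `simulate` is a toy two-parameter forward model rather than the pulsar emission simulator, and `embed` is a fixed random projection in place of the pretrained masked U-Net encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
PHASE = np.linspace(0.0, 2.0 * np.pi, 64)

def simulate(theta):
    """Toy forward simulator: (amplitude, phase shift) -> light curve."""
    amp, shift = theta
    return amp * (1.0 + np.cos(PHASE - shift))

# Stand-in encoder: a fixed random projection into an 8-d latent space,
# instead of the learned masked U-Net encoder.
PROJ = rng.standard_normal((PHASE.size, 8)) / np.sqrt(PHASE.size)

def embed(curve):
    return curve @ PROJ

# Simulation bank: parameter draws, their simulated curves, and embeddings.
bank_params = rng.uniform([0.5, 0.0], [2.0, 2.0 * np.pi], size=(500, 2))
bank_embeds = np.array([embed(simulate(t)) for t in bank_params])

def log_likelihood(theta, observed, sigma=0.1):
    resid = simulate(theta) - observed
    return -0.5 * np.sum((resid / sigma) ** 2)

def infer(observed, k=20, steps=200, step_size=0.05):
    # Step 1: nearest-neighbor retrieval in latent space yields an
    # initial empirical posterior (k parameter vectors, "particles").
    dists = np.linalg.norm(bank_embeds - embed(observed), axis=1)
    particles = bank_params[np.argsort(dists)[:k]].copy()
    # Step 2: simulator-guided hill climbing refines each particle,
    # accepting only proposals that increase the likelihood.
    for i in range(len(particles)):
        best = log_likelihood(particles[i], observed)
        for _ in range(steps):
            cand = particles[i] + step_size * rng.standard_normal(2)
            ll = log_likelihood(cand, observed)
            if ll > best:
                particles[i], best = cand, ll
    return particles

true_theta = np.array([1.3, 1.0])
obs = simulate(true_theta) + 0.1 * rng.standard_normal(PHASE.size)
posterior = infer(obs)
print(posterior.mean(axis=0))  # should land near true_theta
```

The retrieval step replaces the expensive global exploration that MCMC performs, while the hill-climbing loop plays the role of the local, simulator-in-the-loop refinement; the real method's speedup comes from only calling the forward simulator in this cheap local phase.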