🤖 AI Summary
Likelihood maximization is intractable when the likelihood function lacks an analytic form but the model remains simulatable.
Method: This paper proposes a sequential gradient optimization method based on local score matching, which directly models the Fisher score and maximizes likelihood via simulated data—bypassing density estimation or MCMC sampling.
Contribution/Results: We introduce the first linearly parameterized closed-form least-squares score estimator, with theoretically guaranteed bias bounds and empirically demonstrated robustness and efficiency. Evaluated across diverse synthetic and real-world tasks—including generative modeling and neural ODEs—the method achieves state-of-the-art performance, significantly outperforming existing baselines in convergence speed, training stability, and final likelihood improvement.
📝 Abstract
We study the problem of likelihood maximization when the likelihood function is intractable but model simulations are readily available. We propose a sequential, gradient-based optimization method that directly models the Fisher score based on a local score matching technique which uses simulations from a localized region around each parameter iterate. By employing a linear parameterization to the surrogate score model, our technique admits a closed-form, least-squares solution. This approach yields a fast, flexible, and efficient approximation to the Fisher score, effectively smoothing the likelihood objective and mitigating the challenges posed by complex likelihood landscapes. We provide theoretical guarantees for our score estimator, including bounds on the bias introduced by the smoothing. Empirical results on a range of synthetic and real-world problems demonstrate the superior performance of our method compared to existing benchmarks.