Gradient-Informed Monte Carlo Fine-Tuning of Diffusion Models for Low-Thrust Trajectory Design

📅 2025-12-09
🤖 AI Summary
This work addresses the high-dimensional, multi-modal global optimization challenge of low-thrust trajectory design for spacecraft in the Saturn–Titan circular restricted three-body problem. We propose a gradient-guided diffusion model combined with Markov chain Monte Carlo (MCMC) sampling. Specifically, analytical gradients are embedded into Metropolis-Adjusted Langevin Algorithm (MALA) and Hamiltonian Monte Carlo (HMC) samplers; the resulting gradient-informed samples then drive reward-weighted likelihood fine-tuning of the diffusion model, bypassing a conventional data-generation phase and focusing the model on Pareto-optimal solutions. Experiments demonstrate a substantial increase in the feasible-solution ratio from 17.34% to 63.01%, alongside denser and more diverse coverage of the Pareto front. Among the MCMC variants, MALA achieves the best trade-off between optimization performance and computational cost.
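As background for the HMC sampler named above, the sketch below shows one generic HMC step with a leapfrog integrator and a Metropolis accept/reject on the Hamiltonian. The target log-density, step size, and leapfrog count are placeholders for illustration, not the paper's trajectory-design objective.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size, n_leapfrog, rng):
    """One Hamiltonian Monte Carlo step targeting exp(log_prob)."""
    p = rng.standard_normal(x.shape)          # resample Gaussian momentum
    x_new, p_new = x.copy(), p.copy()

    # Leapfrog integration of the Hamiltonian dynamics.
    p_new = p_new + 0.5 * step_size * grad_log_prob(x_new)
    for _ in range(n_leapfrog):
        x_new = x_new + step_size * p_new
        p_new = p_new + step_size * grad_log_prob(x_new)
    p_new = p_new - 0.5 * step_size * grad_log_prob(x_new)

    # Metropolis correction on the Hamiltonian keeps the chain exact.
    h_old = -log_prob(x) + 0.5 * np.dot(p, p)
    h_new = -log_prob(x_new) + 0.5 * np.dot(p_new, p_new)
    if np.log(rng.uniform()) < h_old - h_new:
        return x_new, True                    # accepted
    return x, False                           # rejected
```

In the paper's setting, `grad_log_prob` would come from the analytical state-transition-matrix derivatives of the objective rather than a closed-form density.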

📝 Abstract
Preliminary mission design of low-thrust spacecraft trajectories in the Circular Restricted Three-Body Problem is a global search characterized by a complex objective landscape and numerous local minima. Formulating the problem as sampling from an unnormalized distribution supported on neighborhoods of locally optimal solutions provides the opportunity to deploy Markov chain Monte Carlo (MCMC) methods and generative machine learning. In this work, we extend our previous self-supervised diffusion model fine-tuning framework to employ gradient-informed MCMC. We compare two algorithms, the Metropolis-Adjusted Langevin Algorithm (MALA) and Hamiltonian Monte Carlo (HMC), both initialized from a distribution learned by a diffusion model. Derivatives of an objective function that balances fuel consumption, time of flight, and constraint violations are computed analytically using state transition matrices. We show that incorporating the gradient drift term accelerates mixing and improves convergence of the Markov chain for a multi-revolution transfer in the Saturn-Titan system. Among the evaluated methods, MALA provides the best trade-off between performance and computational cost. Starting from samples generated by a baseline diffusion model trained on a related transfer, MALA explicitly targets Pareto-optimal solutions. Compared to a random walk Metropolis algorithm, it increases the feasibility rate from 17.34% to 63.01% and produces a denser, more diverse coverage of the Pareto front. By fine-tuning a diffusion model on the generated samples and associated reward values with reward-weighted likelihood maximization, we learn the global solution structure of the problem and eliminate the need for a tedious separate data generation phase.
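The gradient drift term highlighted in the abstract is the defining feature of MALA: the proposal moves along the gradient of the log-density before adding noise, and an asymmetric Metropolis-Hastings correction restores exactness. A minimal sketch of one MALA step, with a generic `log_prob` standing in for the paper's objective-based unnormalized distribution:

```python
import numpy as np

def mala_step(x, log_prob, grad_log_prob, step_size, rng):
    """One Metropolis-Adjusted Langevin step targeting exp(log_prob)."""
    # Langevin proposal: gradient drift plus Gaussian noise.
    noise = rng.standard_normal(x.shape)
    x_prop = x + step_size * grad_log_prob(x) + np.sqrt(2 * step_size) * noise

    # Log-density of the asymmetric Gaussian proposal q(x_to | x_from),
    # up to a constant that cancels in the acceptance ratio.
    def log_q(x_to, x_from):
        mean = x_from + step_size * grad_log_prob(x_from)
        return -np.sum((x_to - mean) ** 2) / (4 * step_size)

    # Metropolis-Hastings correction accounts for the drifted proposal.
    log_alpha = (log_prob(x_prop) - log_prob(x)
                 + log_q(x, x_prop) - log_q(x_prop, x))
    if np.log(rng.uniform()) < log_alpha:
        return x_prop, True
    return x, False
```

Setting `step_size` too large collapses the acceptance rate; too small and the drift term's mixing advantage over random-walk Metropolis disappears.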
Problem

Research questions and friction points this paper is trying to address.

Optimizes low-thrust spacecraft trajectories using gradient-informed Monte Carlo
Enhances feasibility and Pareto front coverage in Saturn-Titan transfers
Fine-tunes diffusion models to learn global solution structure efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses gradient-informed MCMC to accelerate diffusion model fine-tuning
Applies MALA and HMC initialized from diffusion model distribution
Fine-tunes diffusion model via reward-weighted likelihood maximization
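The reward-weighted likelihood maximization named in the bullets above can be sketched as follows. This is an assumption-laden illustration: the exponentiated-softmax weighting and the `temperature` parameter are common choices for reward-weighted regression, not necessarily the paper's exact scheme, and `per_sample_losses` stands in for the usual per-sample denoising losses of a diffusion model.

```python
import numpy as np

def reward_weights(rewards, temperature=1.0):
    """Normalized exponentiated-reward weights (assumed softmax-style
    scheme): higher-reward samples get proportionally more influence."""
    w = np.exp((rewards - rewards.max()) / temperature)  # max-shift for stability
    return w / w.sum()

def weighted_diffusion_loss(per_sample_losses, weights):
    """Reward-weighted fine-tuning objective: a weighted average of the
    per-sample diffusion (denoising) losses."""
    return float(np.sum(weights * per_sample_losses))
```

Minimizing this objective skews the diffusion model's learned distribution toward the high-reward (near-Pareto-optimal) MCMC samples, which is what removes the need for a separate labeled data-generation phase.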