Look-Ahead and Look-Back Flows: Training-Free Image Generation with Trajectory Smoothing

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the issue of error accumulation in flow matching–based image generation caused by velocity field adjustments. To mitigate this without requiring retraining, the authors propose two training-free latent trajectory smoothing strategies: Look-Ahead and Look-Back. The Look-Ahead method employs curvature-gated weighted averaging to leverage future trajectory information for forward optimization, while Look-Back dynamically refines the trajectory by applying exponential moving averages over historical paths. As the first trajectory-based—rather than velocity-field-based—smoothing mechanism within the flow matching framework that operates without additional training, the approach effectively suppresses error propagation. Experimental results demonstrate consistently superior generation quality compared to existing training-free methods across multiple benchmarks, including COCO17, CUB-200, and Flickr30K.

📝 Abstract
Recent advances have reformulated diffusion models as deterministic ordinary differential equations (ODEs) through the framework of flow matching, providing a unified formulation for the noise-to-data generative process. Various training-free flow matching approaches improve image generation by adjusting the flow velocity field, eliminating the need for costly retraining. However, modifying the velocity field $v$ introduces errors that propagate through the full generation path, whereas adjustments to the latent trajectory $z$ are naturally corrected by the pretrained velocity network, reducing error accumulation. In this paper, we propose two complementary training-free trajectory smoothing schemes that refine the generative path directly in latent space using future and past information about the velocity $v$ and the latent trajectory $z$: \emph{Look-Ahead}, which averages the current and next-step latents using a curvature-gated weight, and \emph{Look-Back}, which smooths latents via an exponential moving average with decay. Extensive experiments with comprehensive evaluation metrics demonstrate that the proposed training-free trajectory smoothing models substantially outperform state-of-the-art methods across multiple datasets, including COCO17, CUB-200, and Flickr30K.
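The two smoothing schemes can be sketched as modifications to a plain Euler sampler for the flow-matching ODE $dz/dt = v(z, t)$. This is a minimal illustrative sketch, not the paper's implementation: the curvature gate (inverse of the velocity-direction change between steps), the EMA blend ratio, and the toy velocity field are all assumptions made for the example.

```python
import numpy as np

def sample_with_smoothing(v, z0, n_steps=50, ema_decay=0.9, gate_scale=5.0):
    """Euler integration of dz/dt = v(z, t) from t=0 to t=1, with two
    trajectory-level smoothing steps applied directly in latent space.
    The gating and blending forms below are illustrative assumptions."""
    dt = 1.0 / n_steps
    z = z0.copy()
    ema = z0.copy()   # Look-Back: running EMA of the latent trajectory
    v_prev = None
    for k in range(n_steps):
        t = k * dt
        vk = v(z, t)

        # --- Look-Ahead: weighted average of current and next-step latents ---
        z_next = z + dt * vk                      # tentative Euler step
        # Curvature proxy: how much the velocity changed since the last step.
        curvature = 0.0 if v_prev is None else np.linalg.norm(vk - v_prev)
        # Curvature-gated weight (assumed form): w -> 1 on straight segments
        # (recovering plain Euler), w -> 0 where the path bends sharply.
        w = 1.0 / (1.0 + gate_scale * curvature)
        z = (1.0 - w) * z + w * z_next

        # --- Look-Back: EMA smoothing over the historical path ---
        ema = ema_decay * ema + (1.0 - ema_decay) * z
        z = 0.5 * z + 0.5 * ema                   # assumed 50/50 blend with history

        v_prev = vk
    return z

# Toy velocity field pulling the latent toward a fixed target (hypothetical).
target = np.ones(4)
v = lambda z, t: target - z
z_final = sample_with_smoothing(v, np.zeros(4))
```

With the toy field above, the smoothed trajectory still moves monotonically toward the target; the Look-Back blend simply damps how far each step travels, which is the error-suppression behavior the abstract attributes to trajectory-level (rather than velocity-level) adjustment.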
Problem

Research questions and friction points this paper is trying to address.

flow matching
diffusion models
trajectory smoothing
training-free generation
image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

trajectory smoothing
training-free generation
flow matching
latent trajectory
diffusion models