🤖 AI Summary
To address insufficient trajectory diversity in real-time video motion transfer, this paper proposes the GRU-Stochastic Normalizing Flow (GRU-SNF) framework. Unlike conventional GRU-Normalizing Flow (GRU-NF) models that employ deterministic transformations during inference, GRU-SNF is the first to integrate stochastic normalizing flows into the inference stage. It dynamically injects randomness via Markov Chain Monte Carlo (MCMC) sampling—without retraining—thereby significantly expanding the multimodal output space. The method preserves temporal coherence and prediction accuracy while enhancing long-horizon diversity and robustness. Evaluated on keypoint-based motion transfer tasks, GRU-SNF outperforms the original GRU-NF, demonstrating its effectiveness and superiority for generative time-series forecasting in low-bandwidth, highly interactive scenarios.
📝 Abstract
Real-time video motion transfer applications such as immersive gaming and vision-based anomaly detection require accurate yet diverse future predictions to support realistic synthesis and robust downstream decision making under uncertainty. To improve the diversity of such sequential forecasts, we propose a novel inference-time refinement technique that combines Gated Recurrent Unit-Normalizing Flows (GRU-NF) with stochastic sampling methods. While GRU-NF can capture multimodal distributions through its integration of normalizing flows within a temporal forecasting framework, its deterministic transformation structure can limit expressivity. To address this, inspired by Stochastic Normalizing Flows (SNF), we introduce Markov Chain Monte Carlo (MCMC) steps during GRU-NF inference, enabling the model to explore a richer output space and better approximate the true data distribution without retraining. We validate our approach in a keypoint-based video motion transfer pipeline, where capturing temporally coherent and perceptually diverse future trajectories is essential for realistic samples and low-bandwidth communication. Experiments show that our inference framework, Gated Recurrent Unit-Stochastic Normalizing Flows (GRU-SNF), outperforms GRU-NF in generating diverse outputs without sacrificing accuracy, even under longer prediction horizons. By injecting stochasticity during inference, our approach captures multimodal behavior more effectively. These results highlight the potential of integrating stochastic dynamics with flow-based sequence models for generative time series forecasting.
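The core idea, interleaving MCMC steps with a trained deterministic flow at inference time, can be sketched minimally. The snippet below is an illustrative toy, not the paper's implementation: `affine_flow` stands in for a trained GRU-NF transform, `log_target` for the desired output density, and the step sizes and names are hypothetical. It shows how random-walk Metropolis steps applied after the deterministic flow pass inject stochasticity without any retraining:

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_flow(z, scale=1.5, shift=0.3):
    """Stand-in for a trained deterministic flow layer (frozen at inference)."""
    return scale * z + shift

def log_target(x):
    """Toy log-density of the desired output distribution (standard normal)."""
    return -0.5 * np.sum(x**2, axis=-1)

def metropolis_step(x, step=0.5):
    """One random-walk Metropolis step toward log_target; this is where
    stochasticity enters, since acceptance depends on fresh noise."""
    prop = x + step * rng.standard_normal(x.shape)
    log_alpha = log_target(prop) - log_target(x)
    accept = np.log(rng.random(x.shape[0])) < log_alpha
    return np.where(accept[:, None], prop, x)

def snf_inference(z0, n_mcmc=10):
    """Deterministic flow pass followed by MCMC refinement, no retraining:
    the MCMC steps broaden the sample set around the flow's output."""
    x = affine_flow(z0)
    for _ in range(n_mcmc):
        x = metropolis_step(x)
    return x

z0 = rng.standard_normal((256, 2))
samples = snf_inference(z0)
print(samples.shape)  # (256, 2)
```

In the paper's setting the deterministic transform would be the trained GRU-NF producing keypoint trajectories, and the MCMC refinement would run per forecast step; the structure of "flow pass, then accept/reject noise injection" is the same.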