Accelerating Inference of Masked Image Generators via Reinforcement Learning

📅 2025-11-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Slow inference and excessive sampling steps hinder the practical deployment of Masked Generative Models (MGMs). To address this, the paper formulates generative acceleration as a reinforcement learning (RL) problem, proposing an end-to-end RL fine-tuning framework. Methodologically, the authors design a bi-objective reward that jointly optimizes image fidelity (measured via CLIP score and FID) and sampling efficiency (penalizing step count), thereby directly refining the sampling policy of a pretrained MGM. Unlike conventional acceleration approaches such as knowledge distillation, the method requires no auxiliary networks or architectural modifications. Experiments on ImageNet-1K demonstrate a roughly 67% reduction in sampling steps (a 3× speedup) while keeping FID and human evaluation scores statistically indistinguishable from the original model, easing the long-standing quality–speed trade-off in masked image generation.

📝 Abstract
Masked Generative Models (MGMs) demonstrate strong capabilities in generating high-fidelity images. However, they need many sampling steps to produce high-quality generations, resulting in slow inference. In this work, we propose Speed-RL, a novel paradigm for accelerating a pretrained MGM so that it generates high-quality images in fewer steps. Unlike conventional distillation methods, which cast acceleration as a distribution-matching problem where a few-step student model is trained to match the distribution generated by a many-step teacher model, we treat it as a reinforcement learning problem. Since the goal of acceleration is to generate high-quality images in fewer steps, we combine a quality reward with a speed reward and fine-tune the base model using reinforcement learning with the combined reward as the optimization target. Through extensive experiments, we show that the proposed method accelerates the base model by a factor of 3 while maintaining comparable image quality.
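The combined reward described above can be sketched minimally as a weighted sum of a quality term and a step penalty. This is an illustrative assumption, not the paper's code: the function name, the normalization by a step budget, and the `speed_weight` coefficient are all hypothetical.

```python
# Hypothetical sketch of a combined quality + speed reward for RL
# fine-tuning of a masked generative model's sampling policy.
# All names and the weighting scheme are assumptions, not the paper's code.

def combined_reward(quality_score: float, num_steps: int,
                    max_steps: int = 48, speed_weight: float = 0.5) -> float:
    """Blend an image-quality score with a penalty on sampling steps.

    quality_score: e.g. a CLIP-based score in [0, 1] for the generated image.
    num_steps:     sampling steps the policy actually used.
    max_steps:     step budget of the original (unaccelerated) sampler.
    speed_weight:  trade-off coefficient between quality and speed.
    """
    speed_reward = 1.0 - num_steps / max_steps  # fewer steps -> higher reward
    return quality_score + speed_weight * speed_reward

# A rollout that keeps quality comparable while using a third of the
# steps earns a higher reward than the slow baseline:
fast = combined_reward(quality_score=0.82, num_steps=16)
slow = combined_reward(quality_score=0.84, num_steps=48)
```

Under such a reward, an RL fine-tuning loop would favor sampling policies that terminate early whenever quality does not degrade, which is the behavior the abstract reports.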
Problem

Research questions and friction points this paper is trying to address.

Accelerating masked image generators' inference speed
Reducing sampling steps while maintaining image quality
Applying reinforcement learning for model acceleration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning for accelerating image generation
Combined quality and speed reward optimization
Threefold inference speedup with maintained image quality