🤖 AI Summary
Diffusion Transformers (DiTs) achieve high generation quality but suffer from slow sampling. Existing training-free acceleration methods, such as step reduction, feature caching, and sparse attention, rely on uniform heuristic strategies that lack image-level adaptivity and often compromise fidelity; dynamic fine-tuning alternatives incur prohibitive computational overhead. This paper proposes a three-level reinforcement learning framework for inference acceleration that operates without updating the frozen DiT backbone, enabling fine-grained, per-image adaptive optimization. It integrates Group Relative Policy Optimization (GRPO) with an adversarial reward mechanism to mitigate reward hacking and improve generalization. Three lightweight policy heads jointly control step skipping, cache reuse, and sparse attention. Evaluated on Stable Diffusion 3 and FLUX, the method achieves a 2.8× speedup while preserving fidelity: FID and CLIP-Score remain virtually unchanged relative to the baseline.
📝 Abstract
Diffusion Transformers (DiTs) excel at visual generation yet remain hampered by slow sampling. Existing training-free accelerators (step reduction, feature caching, and sparse attention) improve inference speed but typically apply a uniform heuristic or a manually designed adaptive strategy to all images, sacrificing achievable quality. Alternatively, dynamic neural networks offer per-image adaptive acceleration, but their high fine-tuning costs limit broader applicability. To address these limitations, we introduce RAPID3: Tri-Level Reinforced Acceleration Policies for Diffusion Transformers, a framework that delivers image-wise acceleration with zero updates to the base generator. Specifically, three lightweight policy heads (Step-Skip, Cache-Reuse, and Sparse-Attention) observe the current denoising state and independently decide their corresponding speed-up at each timestep. All policy parameters are trained online via Group Relative Policy Optimization (GRPO) while the generator remains frozen. Meanwhile, an adversarially learned discriminator augments the reward signal, discouraging reward hacking by granting higher returns only when generated samples stay close to the original model's distribution. Across state-of-the-art DiT backbones, including Stable Diffusion 3 and FLUX, RAPID3 achieves nearly 3× faster sampling with competitive generation quality.
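The abstract's control loop can be sketched in a few lines: per-timestep state features feed three small policy heads, each emitting a binary accelerate/don't decision, with parameters updated by a group-relative advantage in the style of GRPO. The sketch below is a toy illustration under stated assumptions, not the paper's implementation; the state dimension, reward shape, and the quality proxy standing in for the adversarial discriminator's signal are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM = 8  # assumed size of the denoising-state feature vector

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PolicyHead:
    """One lightweight head (e.g. Step-Skip): logistic policy over a binary action."""
    def __init__(self):
        self.w = np.zeros(STATE_DIM)

    def act(self, state):
        p = sigmoid(self.w @ state)          # probability of taking the speed-up
        a = bool(rng.random() < p)
        return a, state

# Three heads, one per acceleration mechanism; the generator itself is never touched.
heads = {"step_skip": PolicyHead(), "cache_reuse": PolicyHead(),
         "sparse_attn": PolicyHead()}

def rollout(num_steps=10):
    """Simulate one accelerated sampling run; returns the trajectory and a toy reward."""
    traj, speedup, quality = [], 0.0, 1.0
    for _ in range(num_steps):
        state = rng.standard_normal(STATE_DIM)  # stand-in for real denoising features
        for name, head in heads.items():
            a, s = head.act(state)
            traj.append((name, a, s))
            if a:
                speedup += 0.1
                quality -= 0.02  # proxy for the fidelity cost a discriminator would score
    return traj, speedup + quality

GROUP = 8  # group size for the relative advantage
group = [rollout() for _ in range(GROUP)]
rewards = np.array([r for _, r in group])
adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # GRPO-style normalization

# REINFORCE-style update of each head, weighted by the group-relative advantage.
for (traj, _), A in zip(group, adv):
    for name, a, s in traj:
        head = heads[name]
        p = sigmoid(head.w @ s)
        grad_logp = (1.0 - p) * s if a else -p * s  # d log pi(a|s) / d w for a logistic policy
        head.w += 0.01 * A * grad_logp
```

The group-relative advantage removes the need for a learned value baseline: rollouts in the same group are scored against each other, so a trajectory is reinforced only if it beats its group's mean reward.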