DiffBench Meets DiffAgent: End-to-End LLM-Driven Diffusion Acceleration Code Generation

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Diffusion models suffer from high computational costs during inference and deployment challenges, compounded by the absence of a unified framework for automatically integrating diverse acceleration techniques. This work proposes DiffBench, a benchmark suite, and DiffAgent, an intelligent agent that combines large language models (LLMs) with a genetic algorithm to establish a closed-loop automated workflow. The framework enables strategy generation, code synthesis, and iterative optimization of acceleration techniques tailored to any diffusion model. Leveraging a three-stage evaluation pipeline and an integrated code-debugging mechanism, DiffAgent substantially outperforms existing LLM-based approaches, achieving consistently high performance across hardware architectures and deployment scenarios.

📝 Abstract
Diffusion models have achieved remarkable success in image and video generation. However, their inherently multi-step inference process imposes substantial computational overhead, hindering real-world deployment. Accelerating diffusion models is therefore essential, yet determining how to combine multiple model acceleration techniques remains a significant challenge. To address this issue, we introduce a framework driven by large language models (LLMs) for automated acceleration code generation and evaluation. First, we present DiffBench, a comprehensive benchmark that implements a three-stage automated evaluation pipeline across diverse diffusion architectures, optimization combinations, and deployment scenarios. Second, we propose DiffAgent, an agent that generates optimal acceleration strategies and code for arbitrary diffusion models. DiffAgent employs a closed-loop workflow in which a planning component and a debugging component iteratively refine the output of a code generation component, while a genetic algorithm extracts performance feedback from the execution environment to guide subsequent code refinements. We provide a detailed explanation of the DiffBench construction and the design principles underlying DiffAgent. Extensive experiments show that DiffBench offers a thorough evaluation of generated code and that DiffAgent significantly outperforms existing LLMs in producing effective diffusion acceleration strategies.
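The closed loop described above (candidate strategies evaluated in an execution environment, with a genetic algorithm steering the next round of refinements) can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the technique names and the `mock_latency` fitness function are invented stand-ins for DiffBench's real three-stage evaluation pipeline, and the LLM planning/debugging components are omitted entirely.

```python
import random

# Hypothetical catalog of acceleration techniques; the names are
# illustrative placeholders, not taken from the paper.
TECHNIQUES = ["step_distillation", "feature_caching", "int8_quantization",
              "token_merging", "cfg_truncation"]

def mock_latency(strategy):
    """Stand-in for a real benchmark run: lower is better.
    Each technique gets a fake multiplicative speedup factor."""
    speedup = {"step_distillation": 0.45, "feature_caching": 0.80,
               "int8_quantization": 0.70, "token_merging": 0.85,
               "cfg_truncation": 0.90}
    latency = 100.0  # illustrative baseline, ms per image
    for t in strategy:
        latency *= speedup[t]
    # Penalize long combinations to mimic quality degradation limits.
    latency += 5.0 * max(0, len(strategy) - 3)
    return latency

def evolve(generations=20, pop_size=12, seed=0):
    """Genetic search over subsets of techniques (a 'strategy')."""
    rng = random.Random(seed)

    def random_strategy():
        k = rng.randint(1, len(TECHNIQUES))
        return tuple(sorted(rng.sample(TECHNIQUES, k)))

    pop = [random_strategy() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mock_latency)            # selection: keep fastest half
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)   # crossover: union, then subsample
            genes = list(set(a) | set(b))
            child = set(rng.sample(genes, rng.randint(1, len(genes))))
            if rng.random() < 0.3:            # mutation: toggle one technique
                child ^= {rng.choice(TECHNIQUES)}
            if child:
                children.append(tuple(sorted(child)))
        pop = survivors + children
    return min(pop, key=mock_latency)

best = evolve()
print("best strategy:", best, "latency:", round(mock_latency(best), 2))
```

In the paper's full system, the fitness signal comes from actually executing LLM-generated acceleration code on target hardware, and the feedback also flows back into the planning and debugging components rather than only into selection.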
Problem

Research questions and friction points this paper is trying to address.

diffusion models
model acceleration
computational overhead
code generation
LLM-driven automation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion acceleration
Large Language Models (LLMs)
Automated code generation
Benchmarking
Genetic algorithm