SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer

📅 2025-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high training cost, the difficulty of balancing model scale with performance, and the limited inference flexibility of text-to-image generation, this paper proposes SANA-1.5, a computationally efficient linear diffusion Transformer. Methodologically, it introduces three key innovations: (1) a depth-growth training strategy that expands the model during training instead of training the larger model from scratch; (2) a modular pruning approach based on block-level importance analysis that compresses the model with minimal quality loss; and (3) a repeated-sampling inference scaling mechanism that trades extra inference compute for higher generation quality. Combined with a memory-efficient 8-bit optimizer, SANA-1.5 reaches a baseline text-image alignment score of 0.72 on GenEval, improving to 0.80 with inference scaling, a new state of the art on this benchmark.
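The depth-growth idea can be illustrated with a toy residual network: if newly inserted blocks are initialized so that they act as identity maps (here, via zero-initialized weights), the grown model computes exactly the same function as the pretrained one at the start of continued training. This is a minimal sketch of the general growth trick; the exact initialization and growth schedule in SANA-1.5 may differ. All names below (`make_block`, `grow_depth`) are illustrative.

```python
import numpy as np

def make_block(dim, rng=None, zero_init=False):
    """A toy residual block computing x + W @ x.
    Zero-initializing W makes the block an identity map."""
    return np.zeros((dim, dim)) if zero_init else rng.normal(0, 0.02, (dim, dim))

def forward(blocks, x):
    for W in blocks:
        x = x + W @ x  # residual connection
    return x

def grow_depth(blocks, target_depth, dim):
    """Insert zero-initialized blocks after each pretrained block until
    target_depth is reached (assumes target_depth is a multiple of the
    current depth), preserving the network's function at initialization."""
    new_per_old = target_depth // len(blocks) - 1
    grown = []
    for W in blocks:
        grown.append(W)
        for _ in range(new_per_old):
            grown.append(make_block(dim, zero_init=True))
    return grown

rng = np.random.default_rng(0)
dim = 8
small = [make_block(dim, rng) for _ in range(3)]   # "pretrained" shallow model
big = grow_depth(small, 6, dim)                    # grown to 2x depth
x = rng.normal(size=dim)
assert np.allclose(forward(small, x), forward(big, x))  # function preserved at init
```

Because the grown model starts from the shallow model's function, continued training only has to learn the refinement contributed by the new blocks, which is where the reported training-compute savings come from.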

📝 Abstract
This paper presents SANA-1.5, a linear Diffusion Transformer for efficient scaling in text-to-image generation. Building upon SANA-1.0, we introduce three key innovations: (1) Efficient Training Scaling: A depth-growth paradigm that enables scaling from 1.6B to 4.8B parameters with significantly reduced computational resources, combined with a memory-efficient 8-bit optimizer. (2) Model Depth Pruning: A block importance analysis technique for efficient model compression to arbitrary sizes with minimal quality loss. (3) Inference-time Scaling: A repeated sampling strategy that trades computation for model capacity, enabling smaller models to match larger model quality at inference time. Through these strategies, SANA-1.5 achieves a text-image alignment score of 0.72 on GenEval, which can be further improved to 0.80 through inference scaling, establishing a new SoTA on GenEval benchmark. These innovations enable efficient model scaling across different compute budgets while maintaining high quality, making high-quality image generation more accessible.
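The inference-time scaling strategy in the abstract amounts to best-of-N selection: sample several candidates for the same prompt and keep the one a verifier scores highest. The sketch below uses toy stand-ins for the sampler and the scorer (the paper's actual verifier is a learned text-image alignment judge); `best_of_n`, `gen`, and `sc` are hypothetical names for illustration.

```python
import random

def best_of_n(generate, score, prompt, n=4, seed=0):
    """Repeated-sampling inference scaling: draw n candidates for the same
    prompt and return the one the verifier scores highest. `generate` and
    `score` stand in for a diffusion sampler and an alignment scorer."""
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

# Toy stand-ins: "images" are floats; the scorer prefers values near 1.0.
gen = lambda prompt, rng: rng.random()
sc = lambda prompt, img: -abs(img - 1.0)
best = best_of_n(gen, sc, "a red cube", n=8)
```

The expected quality gain grows with n because the maximum of n verifier scores is monotonically non-decreasing in n, which is how a smaller model can close the gap to a larger one by spending more inference compute.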
Problem

Research questions and friction points this paper is trying to address.

Text-to-Image Generation
Resource Efficiency
Image Quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient Text-to-Image Generation
Linear Diffusion Transformer Optimization
Memory-Efficient Multi-Attempt Strategy