LayerSync: Self-aligning Intermediate Layers

📅 2025-10-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the low generation quality and poor training efficiency of diffusion models in settings without external supervision or additional data. To this end, the authors propose LayerSync, a domain-agnostic self-alignment training method. LayerSync uses the semantically rich intermediate-layer representations inside a diffusion model as self-supervised signals and achieves lightweight self-regularization by aligning layers with one another, introducing no extra parameters or computational overhead. Notably, LayerSync is the first method to generalize self-alignment across multimodal diffusion models, including those for image, audio, video, and motion generation. On ImageNet, LayerSync accelerates flow-based transformer training by 8.75× while improving the Fréchet Inception Distance (FID) by 23.6%. Extensive experiments across diverse tasks validate its effectiveness and strong generalization.

📝 Abstract
We propose LayerSync, a domain-agnostic approach for improving the generation quality and the training efficiency of diffusion models. Prior studies have highlighted the connection between the quality of generation and the representations learned by diffusion models, showing that external guidance on model intermediate representations accelerates training. We reconceptualize this paradigm by regularizing diffusion models with their own intermediate representations. Building on the observation that representation quality varies across diffusion model layers, we show that the most semantically rich representations can act as an intrinsic guidance for weaker ones, reducing the need for external supervision. Our approach, LayerSync, is a self-sufficient, plug-and-play regularizer term with no overhead on diffusion model training that generalizes beyond the visual domain to other modalities. LayerSync requires no pretrained models nor additional data. We extensively evaluate the method on image generation and demonstrate its applicability to other domains such as audio, video, and motion generation. We show that it consistently improves the generation quality and the training efficiency. For example, we speed up the training of flow-based transformers by over 8.75× on the ImageNet dataset and improve the generation quality by 23.6%. The code is available at https://github.com/vita-epfl/LayerSync.
Problem

Research questions and friction points this paper is trying to address.

Improving diffusion model generation quality and training efficiency
Reducing need for external supervision through self-alignment
Generalizing approach across visual, audio, video and motion domains
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-aligning intermediate layers for intrinsic guidance
Plug-and-play regularizer with no training overhead
Domain-agnostic approach improving quality and efficiency
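To make the self-alignment idea concrete, here is a minimal sketch of a LayerSync-style regularizer. It is an illustration based on the abstract's description, not the paper's actual implementation: a weaker layer's token representations are pulled toward a stop-gradient copy of a semantically richer layer's representations via cosine similarity. The function name and shapes are assumptions.

```python
import numpy as np

def layer_alignment_loss(weak_feats, strong_feats):
    """Hypothetical LayerSync-style self-alignment term.

    weak_feats, strong_feats: arrays of shape (num_tokens, dim),
    hidden states from a weaker and a semantically richer layer
    of the same diffusion model.
    """
    # Treat the strong layer as a fixed target (stop-gradient in a
    # real training loop; here we simply copy the array).
    target = strong_feats.copy()
    # L2-normalize both sets of token features.
    w = weak_feats / np.linalg.norm(weak_feats, axis=-1, keepdims=True)
    t = target / np.linalg.norm(target, axis=-1, keepdims=True)
    # Per-token cosine similarity, averaged over tokens.
    cos = (w * t).sum(axis=-1).mean()
    # Loss is 0 when the layers are perfectly aligned.
    return 1.0 - cos
```

In training, this term would simply be added (with a small weight) to the usual diffusion/flow-matching objective, which is why the method adds no parameters and negligible overhead.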
Yasaman Haghighi
Ecole Polytechnique Fédérale de Lausanne (EPFL)
Bastien van Delft
Ecole Polytechnique Fédérale de Lausanne (EPFL)
Mariam Hassan
Ecole Polytechnique Fédérale de Lausanne (EPFL)
Alexandre Alahi
Professor, EPFL
Computer Vision · Transportation · Autonomous Driving · Intelligent Transportation Systems · AI