LiON-LoRA: Rethinking LoRA Fusion to Unify Controllable Spatial and Temporal Generation for Video Diffusion

πŸ“… 2025-07-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Video diffusion models (VDMs) struggle to precisely control camera trajectories and object motion under data-limited regimes, primarily due to the nonlinearity, instability, and spatiotemporal coupling inherent in conventional LoRA fusion mechanisms. To address this, we propose LiON-LoRAβ€”a novel LoRA framework that, for the first time, jointly enforces linearity, orthogonality, and norm consistency in adapter design, thereby enabling decoupled and linear modeling of spatial and temporal motion. Our method introduces controllable tokens for fine-grained motion intensity modulation and integrates an enhanced self-attention mechanism within the DiT architecture, allowing high-fidelity, temporally consistent video generation from static-camera inputs alone. Experiments demonstrate significant improvements over state-of-the-art methods in trajectory control accuracy and motion intensity controllability, while achieving strong generalization with only minimal training data.

πŸ“ Abstract
Video Diffusion Models (VDMs) have demonstrated remarkable capabilities in synthesizing realistic videos by learning from large-scale data. Although vanilla Low-Rank Adaptation (LoRA) can learn specific spatial or temporal movement to drive VDMs with constrained data, achieving precise control over both camera trajectories and object motion remains challenging due to unstable fusion and non-linear scalability. To address these issues, we propose LiON-LoRA, a novel framework that rethinks LoRA fusion through three core principles: Linear scalability, Orthogonality, and Norm consistency. First, we analyze the orthogonality of LoRA features in shallow VDM layers, enabling decoupled low-level controllability. Second, norm consistency is enforced across layers to stabilize fusion during complex camera motion combinations. Third, a controllable token is integrated into the diffusion transformer (DiT) to linearly adjust motion amplitudes for both cameras and objects, with a modified self-attention mechanism to ensure decoupled control. Additionally, we extend LiON-LoRA to temporal generation by leveraging static-camera videos, unifying spatial and temporal controllability. Experiments demonstrate that LiON-LoRA outperforms state-of-the-art methods in trajectory control accuracy and motion strength adjustment, achieving superior generalization with minimal training data. Project Page: https://fuchengsu.github.io/lionlora.github.io/
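To make the three principles concrete, here is a minimal numpy sketch of fusing two LoRA updates under orthogonality and norm consistency, with a per-adapter strength for linear scaling. This is an illustration of the general idea, not the paper's implementation; all dimensions, variable names, and the target-norm choice are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hypothetical layer width and LoRA rank

# Two hypothetical LoRA adapters (e.g., one per camera trajectory),
# each contributing a low-rank weight update dW = B @ A.
A1, B1 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
A2, B2 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
dW1, dW2 = B1 @ A1, B2 @ A2

# Orthogonality check: cosine similarity between the flattened updates.
# Near-zero similarity suggests the two motions can be fused with
# little mutual interference.
cos = np.dot(dW1.ravel(), dW2.ravel()) / (
    np.linalg.norm(dW1) * np.linalg.norm(dW2)
)

# Norm consistency: rescale each update to a shared target norm so
# that neither adapter dominates the fusion.
target = 1.0
dW1_n = dW1 * (target / np.linalg.norm(dW1))
dW2_n = dW2 * (target / np.linalg.norm(dW2))

# Linear scalability: a scalar strength per adapter modulates its
# motion amplitude; the fused update is a simple weighted sum.
s1, s2 = 0.8, 0.5
dW_fused = s1 * dW1_n + s2 * dW2_n
```

The fused update `dW_fused` would then be added to the frozen base weight; under near-orthogonality and matched norms, each strength coefficient controls its own motion roughly independently.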
Problem

Research questions and friction points this paper is trying to address.

Achieving precise control over camera trajectories and object motion in VDMs
Addressing unstable fusion and non-linear scalability in LoRA fusion
Unifying spatial and temporal controllability in video generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear scalability for motion amplitude control
Orthogonality for decoupled low-level controllability
Norm consistency to stabilize fusion
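The linear-scalability bullet above can be sketched as follows: because a LoRA update enters the layer additively, scaling it by a single scalar shifts the layer's output linearly, which is what makes a strength knob (here a plain float standing in for the paper's controllable token) behave predictably. The names and shapes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 32, 2
W = rng.normal(size=(d, d))                    # frozen base weight
A, B = rng.normal(size=(r, d)), rng.normal(size=(d, r))
dW = B @ A                                     # LoRA update for one motion

def forward(x, strength):
    # The adapted output differs from the base output by exactly
    # strength * (dW @ x), so motion amplitude scales linearly.
    return (W + strength * dW) @ x

x = rng.normal(size=d)
base = forward(x, 0.0)
half = forward(x, 0.5)
full = forward(x, 1.0)

# Deviation from the base output is proportional to the strength.
assert np.allclose(half - base, 0.5 * (full - base))
```

Linearity holds exactly for a single linear layer as shown; across a full nonlinear VDM it only holds approximately, which is why the paper trains the adapters and controllable token to preserve it.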
πŸ”Ž Similar Papers