DeLiVR: Differential Spatiotemporal Lie Bias for Efficient Video Deraining

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Outdoor videos suffer from rain streaks, motion blur, sensor noise, and inter-frame misalignment caused by camera micro-movements, leading to severe temporal artifacts. To address these challenges, this paper proposes an efficient video deraining method based on differential spatiotemporal Lie group modeling. The core contributions are: (1) a Lie-group-constrained relative bias mechanism enabling geometrically consistent feature alignment across frames; and (2) a Lie-group-guided attention bias integrating normalized coordinate transformation, temporal decay, and attention masking, embedded within the network to enhance the robustness of temporal modeling. The method avoids the high computational overhead and poor generalization of conventional optical-flow-based and heuristic alignment approaches. Extensive experiments on multiple public benchmarks demonstrate significant improvements in deraining quality and inference speed, effectively suppressing inter-frame mismatch and temporal artifacts while striking a favorable balance between accuracy and efficiency.

📝 Abstract
Videos captured in the wild often suffer from rain streaks, blur, and noise. In addition, even slight changes in camera pose can amplify cross-frame mismatches and temporal artifacts. Existing methods rely on optical flow or heuristic alignment, which are computationally expensive and less robust. Lie groups, by contrast, provide a principled way to represent continuous geometric transformations, making them well suited for enforcing spatial and temporal consistency in video modeling. Building on this insight, we propose DeLiVR, an efficient video deraining method that injects spatiotemporal Lie-group differential biases directly into the attention scores of the network. Specifically, the method introduces two complementary components. First, a rotation-bounded Lie relative bias predicts the in-plane angle of each frame using a compact prediction module, where normalized coordinates are rotated and compared with base coordinates to achieve geometry-consistent alignment before feature aggregation. Second, a differential group displacement computes angular differences between adjacent frames to estimate an angular velocity. This bias computation combines temporal decay and attention masks to focus on inter-frame relationships while precisely matching the direction of rain streaks. Extensive experimental results demonstrate the effectiveness of our method on publicly available benchmarks.
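The rotation-bounded relative bias described above can be pictured with a small numeric sketch. This is a toy illustration only: the function names, the Gaussian-style distance penalty, and the `sigma` constant are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def rotate_coords(coords, theta):
    """Apply an SO(2) (in-plane) rotation to normalized (x, y) coordinates."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return coords @ R.T

def relative_bias(coords, theta, sigma=0.5):
    """Attention-logit bias that decays with the distance between
    rotated coordinates and the base (reference-frame) coordinates."""
    rotated = rotate_coords(coords, theta)
    # pairwise squared distances between rotated and base positions
    d2 = ((rotated[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return -d2 / (2 * sigma ** 2)  # larger mismatch -> more negative bias

# toy 4x4 grid of normalized coordinates in [-1, 1]
xs, ys = np.meshgrid(np.linspace(-1, 1, 4), np.linspace(-1, 1, 4))
coords = np.stack([xs.ravel(), ys.ravel()], axis=-1)
bias = relative_bias(coords, theta=np.deg2rad(5.0))
print(bias.shape)  # one bias value per (query, key) position pair
```

Adding such a bias to attention logits softly favors key positions that agree with the predicted in-plane rotation, which is one way geometry-consistent alignment can happen before feature aggregation.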
Problem

Research questions and friction points this paper is trying to address.

Removing rain streaks and noise from videos efficiently
Addressing cross-frame mismatches and temporal artifacts
Replacing optical flow with robust geometric transformations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Injecting Lie-group biases into attention scores
Using rotation-bounded bias for geometry-consistent alignment
Applying differential displacement with temporal decay
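The differential-displacement idea in the bullets above can be sketched roughly as follows. All names and constants here are illustrative assumptions (the masking window, the decay rate `gamma`, and the velocity penalty `lam` are not from the paper); the sketch only shows how a decay-plus-mask bias could enter cross-frame attention scores.

```python
import numpy as np

def temporal_bias(thetas, lam=1.0, gamma=0.5):
    """Toy cross-frame attention bias from per-frame in-plane angles."""
    T = len(thetas)
    # angular difference between adjacent frames ~ angular velocity
    velocity = np.diff(thetas, prepend=thetas[0])
    dt = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
    decay = -gamma * dt                             # distant frames matter less
    motion = -lam * np.abs(velocity[None, :]) * dt  # penalize fast-rotating frames
    mask = np.where(dt <= 2, 0.0, -np.inf)          # attend only to nearby frames
    return decay + motion + mask

thetas = np.deg2rad([0.0, 1.0, 2.5, 2.0])  # predicted in-plane angles per frame
bias = temporal_bias(thetas)
scores = np.zeros((4, 4)) + bias            # bias added to raw attention logits
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)     # masked entries get zero weight
```

The mask zeroes out attention between frames more than two steps apart, while the decay and velocity terms downweight temporally distant or fast-moving frames, which is the general mechanism the bullets describe.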
Shuning Sun
University of Chinese Academy of Sciences
Jialang Lu
Hubei University
Xiang Chen
Nanjing University of Science and Technology
Jichao Wang
University of Chinese Academy of Sciences
Dianjie Lu
Shandong Normal University, Professor
Guijuan Zhang
Shandong Normal University
Guangwei Gao
Professor of PCALab@NJUST, IEEE/CCF/CSIG/CAAI/CAA Senior Member
Pattern Recognition · Image Understanding · Machine Learning
Zhuoran Zheng
Sun Yat-sen University
UHD Image · Medical Image · Label Distribution Learning