MoCHA-former: Moiré-Conditioned Hybrid Adaptive Transformer for Video Demoiréing

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods struggle to model the strong, dynamic moiré patterns in screen-captured videos—caused by aliasing between camera CFA sampling and display sub-pixel frequencies—due to their spatial non-stationarity, large-scale structural complexity, inter-channel heterogeneity, and severe temporal fluctuations. This paper proposes a hybrid adaptive Transformer, the first to introduce moiré-conditioned adaptive mechanisms that decouple moiré artifacts from scene content, enabling joint spatio-temporal-channel modeling in both RAW and sRGB domains. The design comprises a Moiré Decoupling Block (MDB), a Detail Decoupling Block (DDB), and a Moiré Conditioning Block (MCB), together with a window-attention Spatial Fusion Block (SFB) and Feature Channel Attention (FCA), achieving temporal consistency without explicit frame alignment. Evaluated on two video datasets, the method achieves state-of-the-art performance across PSNR, SSIM, and LPIPS metrics, with particularly robust restoration in complex-texture and high-motion scenes.

📝 Abstract
Recent advances in portable imaging have made camera-based screen capture ubiquitous. Unfortunately, frequency aliasing between the camera's color filter array (CFA) and the display's sub-pixels induces moiré patterns that severely degrade captured photos and videos. Although various demoiréing models have been proposed to remove such moiré patterns, these approaches still suffer from several limitations: (i) spatially varying artifact strength within a frame, (ii) large-scale and globally spreading structures, (iii) channel-dependent statistics, and (iv) rapid temporal fluctuations across frames. We address these issues with the Moiré Conditioned Hybrid Adaptive Transformer (MoCHA-former), which comprises two key components: Decoupled Moiré Adaptive Demoiréing (DMAD) and Spatio-Temporal Adaptive Demoiréing (STAD). DMAD separates moiré and content via a Moiré Decoupling Block (MDB) and a Detail Decoupling Block (DDB), then produces moiré-adaptive features using a Moiré Conditioning Block (MCB) for targeted restoration. STAD introduces a Spatial Fusion Block (SFB) with window attention to capture large-scale structures, and Feature Channel Attention (FCA) to model channel dependence in RAW frames. To ensure temporal consistency, MoCHA-former performs implicit frame alignment without any explicit alignment module. We analyze moiré characteristics through qualitative and quantitative studies, and evaluate on two video datasets covering RAW and sRGB domains. MoCHA-former consistently surpasses prior methods across PSNR, SSIM, and LPIPS.
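The frequency aliasing the abstract describes can be illustrated with a toy 1-D sketch (not from the paper): a signal whose frequency exceeds half the sampling rate folds down to a low "beat" frequency, which is the same mechanism that turns the fine display sub-pixel grid into coarse moiré bands under the camera's CFA sampling. The frequencies and rates below are arbitrary illustrative choices.

```python
import math

fs = 10.0      # sampling rate (Hz) -- stand-in for the camera's CFA sampling grid
f_true = 9.0   # true signal frequency (Hz) -- stand-in for the display sub-pixel pitch

# Folding formula: the apparent (aliased) frequency after sampling
f_alias = abs(f_true - round(f_true / fs) * fs)  # -> 1.0 Hz

# The 9 Hz sinusoid, sampled at 10 Hz, is indistinguishable from a
# phase-flipped 1 Hz sinusoid at the same sample instants.
samples_true = [math.sin(2 * math.pi * f_true * n / fs) for n in range(40)]
samples_alias = [-math.sin(2 * math.pi * f_alias * n / fs) for n in range(40)]
assert all(abs(a - b) < 1e-9 for a, b in zip(samples_true, samples_alias))
print(f_alias)  # 1.0
```

In 2-D, the analogous folding of the sub-pixel grid's spatial frequencies produces the large-scale, spatially varying stripe patterns that demoiréing methods must remove.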
Problem

Research questions and friction points this paper is trying to address.

Removing moiré patterns from camera-captured screen videos
Handling spatially varying moiré artifact strength within a frame
Modeling channel-dependent statistics and rapid temporal fluctuations across frames
Innovation

Methods, ideas, or system contributions that make the work stand out.

Moiré-conditioned hybrid adaptive transformer architecture
Decoupled moiré adaptive demoiréing with conditioning blocks
Spatio-temporal adaptive demoiréing without explicit alignment