UCMNet: Uncertainty-Aware Context Memory Network for Under-Display Camera Image Restoration

📅 2026-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses image restoration for under-display cameras, where light passing through the display panel induces spatially varying diffraction and scattering, causing non-uniform degradation and loss of high-frequency detail. The authors propose UCMNet, a lightweight network that leverages an uncertainty map as a prior to drive adaptive restoration. Specifically, an uncertainty-aware mechanism dynamically guides the retrieval of region-specific features from a context memory bank, while an uncertainty-driven loss function jointly optimizes the uncertainty prior during training. This design explicitly models the spatially non-uniform degradation inherent in under-display imaging, and UCMNet achieves state-of-the-art performance across multiple benchmarks with 30% fewer parameters than existing methods.
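The paper does not spell out its uncertainty-driven loss here, but a common formulation for jointly learning a restoration and a per-pixel uncertainty map is the heteroscedastic (Kendall & Gal style) loss, where high predicted uncertainty down-weights the residual and a log-variance term discourages inflating uncertainty everywhere. A minimal NumPy sketch, assuming the network predicts a log-variance map alongside the restored image:

```python
import numpy as np

def uncertainty_driven_loss(pred, target, log_var):
    """Heteroscedastic uncertainty loss (sketch, not the paper's exact loss).

    pred, target, log_var: arrays of the same shape.
    Residuals are attenuated where predicted uncertainty (exp(log_var))
    is high; the +log_var term penalizes over-estimating uncertainty,
    so the map must concentrate on genuinely degraded regions.
    """
    inv_var = np.exp(-log_var)                 # precision = 1 / sigma^2
    data_term = inv_var * (pred - target) ** 2  # uncertainty-weighted residual
    return float(np.mean(data_term + log_var))  # regularized by log-variance
```

With `log_var = 0` this reduces to plain MSE; raising `log_var` in a badly degraded region trades a log penalty for a smaller weighted residual, which is what lets the uncertainty map localize diffraction-dominated areas.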
📝 Abstract
Under-display cameras (UDCs) allow for full-screen designs by positioning the imaging sensor underneath the display. Nonetheless, light diffraction and scattering through the various display layers result in spatially varying and complex degradations, which significantly reduce high-frequency details. Current PSF-based physical modeling techniques and frequency-separation networks are effective at reconstructing low-frequency structures and maintaining overall color consistency. However, they still face challenges in recovering fine details when dealing with complex, spatially varying degradation. To solve this problem, we propose a lightweight Uncertainty-aware Context-Memory Network (UCMNet) for UDC image restoration. Unlike previous methods that apply uniform restoration, UCMNet performs uncertainty-aware adaptive processing to restore high-frequency details in regions with varying degradations. The estimated uncertainty maps, learned through an uncertainty-driven loss, quantify spatial uncertainty induced by diffraction and scattering, and guide the Memory Bank to retrieve region-adaptive context from the Context Bank. This process enables effective modeling of the non-uniform degradation characteristics inherent to UDC imaging. Leveraging this uncertainty as a prior, UCMNet achieves state-of-the-art performance on multiple benchmarks with 30% fewer parameters than previous models. Project page: https://kdhrick2222.github.io/projects/UCMNet/
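The abstract describes the uncertainty map guiding retrieval of region-adaptive context from a bank of learned features. The paper's Memory Bank / Context Bank internals are not given here; one plausible reading is attention-based retrieval where the uncertainty map controls how strongly each position is replaced by retrieved context. A hedged NumPy sketch under that assumption:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def uncertainty_guided_retrieval(features, uncertainty, context_bank):
    """Sketch of uncertainty-guided context retrieval (assumed mechanism,
    not the paper's exact Memory Bank / Context Bank design).

    features:     (N, C) per-position feature vectors
    uncertainty:  (N,)   uncertainty map values in [0, 1]
    context_bank: (K, C) learned context entries

    Each position attends over the bank; the retrieved context is blended
    in proportion to local uncertainty, so heavily degraded (high-
    uncertainty) regions rely more on stored context.
    """
    scores = features @ context_bank.T / np.sqrt(features.shape[1])
    attn = softmax(scores, axis=-1)     # (N, K) attention over bank entries
    retrieved = attn @ context_bank     # (N, C) region-adaptive context
    u = uncertainty[:, None]            # (N, 1) blend weight per position
    return (1.0 - u) * features + u * retrieved
```

At `uncertainty = 0` the features pass through untouched; at `uncertainty = 1` a position is fully replaced by its retrieved context, matching the idea of adaptive rather than uniform restoration.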
Problem

Research questions and friction points this paper is trying to address.

Under-display camera
image restoration
spatially varying degradation
high-frequency details
diffraction and scattering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty-aware
Context Memory
Under-Display Camera
Image Restoration
Spatially Varying Degradation
Daehyun Kim — Hanyang University
Youngmin Kim — Hanyang University, Agency for Defense Development (ADD)
Yoon Ju Oh — Hanyang University
Tae Hyun Kim — Dept. of Computer Science, Hanyang University
Computational Imaging · Computer Vision · Machine Learning