🤖 AI Summary
Real-world remote photoplethysmography (rPPG) signals are often distorted by camera noise, motion blur, and defocus. To address this, we propose a novel codebook-driven rPPG restoration paradigm that formulates signal reconstruction as a latent feature retrieval task within a clean PPG codebook space. Our method employs a spatially aware encoder to extract robust spatiotemporal features, incorporates a physiological signal distillation loss to suppress aperiodic visual artifacts, and establishes an end-to-end trainable codebook retrieval and reconstruction framework. Crucially, this is the first work to recast rPPG restoration as a noise-robust feature matching problem. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art methods across four benchmark datasets, exhibits strong cross-dataset generalization, and maintains high-fidelity signal recovery even under challenging conditions such as severe motion blur and defocus.
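To make the retrieval idea concrete, below is a minimal sketch of codebook-based feature matching in the style of vector quantization: noisy video features act as queries and are snapped to their nearest "clean" PPG code, with a straight-through estimator keeping the lookup end-to-end trainable. The class name, shapes, and the L2 matching rule are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of codebook retrieval for rPPG restoration
# (VQ-style nearest-neighbor lookup; not the authors' exact code).
import torch
import torch.nn as nn

class CodebookRetrieval(nn.Module):
    def __init__(self, num_codes: int = 512, code_dim: int = 128):
        super().__init__()
        # Codebook of noise-free PPG feature vectors, assumed to be
        # learned from ground-truth PPG signals.
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, noisy_feats: torch.Tensor) -> torch.Tensor:
        # noisy_feats: (batch, time, code_dim) from the video encoder.
        b, t, d = noisy_feats.shape
        flat = noisy_feats.reshape(-1, d)                       # (b*t, d)
        # Squared L2 distance between each query and every codebook entry.
        dists = (
            flat.pow(2).sum(1, keepdim=True)
            - 2 * flat @ self.codebook.weight.t()
            + self.codebook.weight.pow(2).sum(1)
        )                                                       # (b*t, num_codes)
        idx = dists.argmin(dim=1)                               # nearest clean code
        quantized = self.codebook(idx).reshape(b, t, d)
        # Straight-through estimator: gradients flow to the encoder as if
        # the lookup were identity, keeping the framework end-to-end trainable.
        return noisy_feats + (quantized - noisy_feats).detach()
```

In this reading, restoration reduces to replacing each corrupted feature with its closest match in a proxy space that, by construction, contains only clean physiological patterns.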
📝 Abstract
Remote photoplethysmography (rPPG) aims to measure physiological signals from facial videos without contact, and has shown great potential in many applications. Most existing methods design neural networks to extract video-based rPPG features directly for heart rate estimation. Although they can achieve acceptable results, recovering the rPPG signal becomes intractable when real-world interference corrupts the facial video: non-physiological factors (e.g., camera device noise, defocus, and motion blur) inevitably distort the extracted rPPG signal, so recent extraction methods yield noisy estimates under such degradation. In this paper, we propose a novel method named CodePhys, which innovatively treats rPPG measurement as a code query task in a noise-free proxy space (i.e., a codebook) constructed from ground-truth PPG signals. We treat noisy rPPG features as queries and generate high-fidelity rPPG features by matching them with noise-free PPG features from the codebook. Our approach also incorporates a spatial-aware encoder network with a spatial attention mechanism to highlight physiologically active areas, and uses a distillation loss to reduce the influence of non-periodic visual interference. Experimental results on four benchmark datasets demonstrate that CodePhys outperforms state-of-the-art methods in both intra-dataset and cross-dataset settings.
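The distillation loss mentioned above can be pictured as a teacher-student objective: a frozen teacher branch encodes the clean ground-truth PPG signal, and the video (student) features are pulled toward it so that non-periodic visual interference is penalized. The teacher/student split and the cosine form below are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a feature-distillation loss against non-periodic
# interference (assumed cosine objective; not the authors' exact loss).
import torch
import torch.nn.functional as F

def distillation_loss(student_feats: torch.Tensor,
                      teacher_feats: torch.Tensor) -> torch.Tensor:
    """Both tensors: (batch, time, dim).

    student_feats: features from the spatial-aware video encoder.
    teacher_feats: features encoded from the ground-truth PPG signal;
    detached so the teacher branch receives no gradient.
    """
    s = F.normalize(student_feats, dim=-1)
    t = F.normalize(teacher_feats.detach(), dim=-1)
    # 1 - cosine similarity, averaged over batch and time steps.
    return (1.0 - (s * t).sum(dim=-1)).mean()
```

Minimizing this term encourages the video features to carry only the periodic physiological component shared with the clean PPG, which complements the codebook lookup at inference time.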