🤖 AI Summary
Existing RAW reconstruction methods prioritize pixel-level fidelity, limiting adaptability to diverse rendering styles and post-processing workflows. To address this, we propose an editing-aware RAW reconstruction framework. First, we design a modular, differentiable ISP model that faithfully emulates the physical imaging pipeline. Second, we introduce a plug-and-play editing-robustness loss, computed in sRGB space, which explicitly incorporates editing stability into the RAW reconstruction objective for the first time. Third, we train the model via stochastic sampling of ISP parameter distributions, enabling metadata-driven fine-tuning. Our approach is compatible with mainstream RAW reconstruction architectures. Quantitative evaluation shows 1.5–2 dB PSNR improvement in sRGB reconstruction across multiple editing scenarios, significantly enhancing cross-domain reconstruction quality and post-capture editing flexibility.
📝 Abstract
Users frequently edit camera images post-capture to achieve their preferred photofinishing style. While editing in the RAW domain provides greater accuracy and flexibility, most edits are performed on the camera's display-referred output (e.g., 8-bit sRGB JPEG) since RAW images are rarely stored. Existing RAW reconstruction methods can recover RAW data from sRGB images, but these approaches are typically optimized for pixel-wise RAW reconstruction fidelity and tend to degrade under diverse rendering styles and editing operations. We introduce a plug-and-play, edit-aware loss function that can be integrated into any existing RAW reconstruction framework to make the recovered RAWs more robust to different rendering styles and edits. Our loss formulation incorporates a modular, differentiable image signal processor (ISP) that simulates realistic photofinishing pipelines with tunable parameters. During training, parameters for each ISP module are randomly sampled from carefully designed distributions that model practical variations in real camera processing. The loss is then computed in sRGB space between ground-truth and reconstructed RAWs rendered through this differentiable ISP. Incorporating our loss improves sRGB reconstruction quality by up to 1.5-2 dB PSNR across various editing conditions. Moreover, when applied to metadata-assisted RAW reconstruction methods, our approach enables fine-tuning for target edits, yielding further gains. Since photographic editing is the primary motivation for RAW reconstruction in consumer imaging, our simple yet effective loss function provides a general mechanism for enhancing edit fidelity and rendering flexibility across existing methods.