🤖 AI Summary
Existing image retouching methods struggle to deliver physically consistent enhancement with precise parameter control: text prompts are ambiguous and entangled, separate per-parameter heads or weights hurt scalability and multi-parameter composition, and models respond poorly to subtle parameter variations. This paper proposes CameraMaster, a unified camera-aware diffusion framework that explicitly decouples the camera directive (the photographer's intent) from a parameter embedding encoding precise camera settings such as exposure, white balance, and zoom. The parameter embedding modulates both the directive and the content semantics; the modulated directive is injected into content features via cross-attention, and both signals further condition and gate the time embedding for unified, layer-wise modulation during denoising. This yields fine-grained, near-linear, and composable multi-parameter control without separate modules per parameter. Trained and evaluated on a 78K image-prompt dataset annotated with camera parameters, CameraMaster shows monotonic, near-linear parameter responses and seamless multi-parameter composition, significantly outperforming existing controllable generation approaches.
📝 Abstract
Text-guided diffusion models have greatly advanced image editing and generation. However, achieving physically consistent image retouching with precise parameter control (e.g., exposure, white balance, zoom) remains challenging. Existing methods either rely solely on ambiguous and entangled text prompts, which hinders precise camera control, or train separate heads/weights for parameter adjustment, which compromises scalability, multi-parameter composition, and sensitivity to subtle variations. To address these limitations, we propose CameraMaster, a unified camera-aware framework for image retouching. The key idea is to explicitly decouple the camera directive and then coherently integrate two critical information streams: a directive representation that captures the photographer's intent, and a parameter embedding that encodes precise camera settings. CameraMaster first uses the camera parameter embedding to modulate both the camera directive and the content semantics. The modulated directive is then injected into the content features via cross-attention, yielding a strongly camera-sensitive semantic context. In addition, the directive and camera embeddings are injected as conditioning and gating signals into the time embedding, enabling unified, layer-wise modulation throughout the denoising process and enforcing tight semantic-parameter alignment. To train and evaluate CameraMaster, we construct a large-scale dataset of 78K image-prompt pairs annotated with camera parameters. Extensive experiments show that CameraMaster produces monotonic and near-linear responses to parameter variations, supports seamless multi-parameter composition, and significantly outperforms existing methods.
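The conditioning pathway described in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' code: all module names, dimensions, and the FiLM-style scale/shift modulation are assumptions; the paper only specifies that the parameter embedding modulates the directive and content, that the modulated directive is injected via cross-attention, and that directive and camera embeddings gate the time embedding.

```python
import torch
import torch.nn as nn

class CameraConditioner(nn.Module):
    """Hypothetical sketch of CameraMaster-style conditioning.

    - Embed the camera parameters (exposure, white balance, zoom) and use
      them to modulate (FiLM-style scale/shift, an assumption) both the
      directive tokens and the content features.
    - Inject the modulated directive into content features via
      cross-attention, producing a camera-sensitive semantic context.
    - Gate the diffusion time embedding with pooled directive and camera
      signals, so every denoising layer sees the camera condition.
    """
    def __init__(self, dim=64, n_params=3):
        super().__init__()
        self.param_embed = nn.Sequential(
            nn.Linear(n_params, dim), nn.SiLU(), nn.Linear(dim, 2 * dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4,
                                                batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, content, directive, params, t_emb):
        # Scale/shift modulation from the camera parameter embedding.
        scale, shift = self.param_embed(params).chunk(2, dim=-1)
        directive = directive * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        content = content * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        # Cross-attention: content queries attend to the modulated directive.
        ctx, _ = self.cross_attn(content, directive, directive)
        # Gate the time embedding with pooled directive + camera signals.
        g = self.gate(torch.cat([directive.mean(dim=1), scale + shift], dim=-1))
        return ctx, t_emb * g

cond = CameraConditioner()
content = torch.randn(2, 16, 64)   # image/latent tokens (batch, seq, dim)
directive = torch.randn(2, 8, 64)  # camera-directive text tokens
params = torch.randn(2, 3)         # (exposure, white balance, zoom)
t_emb = torch.randn(2, 64)         # diffusion timestep embedding
ctx, t_mod = cond(content, directive, params, t_emb)
print(ctx.shape, t_mod.shape)      # torch.Size([2, 16, 64]) torch.Size([2, 64])
```

Because a single parameter vector drives all three pathways, varying one parameter (say, exposure) shifts the shared embedding smoothly, which is consistent with the monotonic, near-linear responses the paper reports.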