🤖 AI Summary
This work addresses visual artifacts in projector-based surface texturing, including color saturation, chromatic shifts in highlight/shadow regions, and texture leakage, caused by physical luminance constraints and surface reflectance properties. We propose the Projection Surface Adaptation (PSA) framework, the first to integrate compensability modeling into text-driven projection image generation. PSA employs a dual-network co-simulation architecture (projector compensation plus project-and-capture), jointly optimized via content- and saturation-aware loss functions and text-encoder-guided diffusion generation, enabling end-to-end differentiable optimization without requiring real-world capture pretraining. Evaluated across diverse textured surfaces, PSA effectively suppresses texture interference while enhancing semantic consistency and color fidelity. User studies demonstrate a 62% reduction in perceived visual artifacts and 5× faster generation compared to prior methods.
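The dual-network co-simulation described above can be sketched as two differentiable modules chained end to end. This is a minimal toy illustration, not the paper's method: `compensation_net` and `project_capture_net` here are small untrained conv stacks standing in for PSA's simulators, which are trained on real projector-camera data.

```python
import torch
import torch.nn as nn

def conv_block():
    # Tiny stand-in network; sigmoid keeps outputs in the displayable [0, 1] range.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
    )

compensation_net = conv_block()     # desired appearance -> compensated projector image
project_capture_net = conv_block()  # projector image -> simulated camera capture

desired = torch.rand(1, 3, 32, 32)              # stylized target appearance
projector_img = compensation_net(desired)       # compensated projector input
perceived = project_capture_net(projector_img)  # what the viewer would see

# Because both simulators are differentiable, a content loss on the simulated
# capture back-propagates end to end -- no real project-and-capture is needed.
loss = nn.functional.mse_loss(perceived, desired)
loss.backward()
```

The key design point is that gradients of the loss on the *simulated* capture flow back through both networks, so the projector image (or an upstream generator) can be optimized entirely in simulation.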
📝 Abstract
We propose LAPIG, a language-guided projector image generation method with surface adaptation and stylization. LAPIG consists of a projector-camera system and a target textured projection surface. LAPIG takes a user text prompt as input and aims to transform the surface style using the projector. LAPIG's key challenge is that, due to the projector's physical brightness limitation and the surface texture, the viewer's perceived projection may suffer from color saturation and artifacts in both dark and bright regions, such that even with state-of-the-art projector compensation techniques, the viewer may see clear surface texture-related artifacts. Therefore, how to generate a projector image that follows the user's instruction while displaying minimal surface artifacts is an open problem. To address this issue, we propose projection surface adaptation (PSA), which generates compensable surface stylizations. We first train two networks to simulate the projector compensation and project-and-capture processes; this allows us to find a satisfactory projector image without real project-and-capture, and to exploit gradient descent for fast convergence. Then, we design content and saturation losses to guide projector image generation, such that the generated image shows no clearly perceivable artifacts when projected. Finally, the generated image is projected for visually pleasing surface style morphing effects. The source code and video are available on the project page: https://Yu-chen-Deng.github.io/LAPIG/.
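The loss-guided optimization described in the abstract can be illustrated with a toy version. This is a sketch under simplifying assumptions, not the paper's implementation: the project-and-capture simulator is replaced by an analytic model (per-pixel reflectance modulation followed by clipping to the displayable range, which is what produces saturation), the content and saturation losses are simple quadratic penalties, and the gradient is derived by hand instead of by a trained network's autograd.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's learned simulators): the surface
# reflectance r modulates the projected image, and the displayable range is
# clipped -- the source of saturation artifacts.
r = rng.uniform(0.3, 1.0, size=(8, 8, 3))       # surface reflectance
target = rng.uniform(0.0, 1.0, size=(8, 8, 3))  # desired stylized appearance

def capture(p):
    """Simulated project-and-capture: reflectance-modulated, then clipped."""
    return np.clip(r * p, 0.0, 1.0)

def loss_and_grad(p, eps=0.05, w_sat=0.1):
    """Content loss + saturation penalty, with a hand-derived gradient."""
    c = capture(p)
    active = (r * p > 0.0) & (r * p < 1.0)   # pixels where the clip is inactive
    content = np.sum((c - target) ** 2)
    g_content = np.where(active, 2.0 * (c - target) * r, 0.0)
    over = np.maximum(p - (1.0 - eps), 0.0)  # pixels near the projector's limits
    under = np.maximum(eps - p, 0.0)
    sat = np.sum(over ** 2 + under ** 2)
    g_sat = 2.0 * over - 2.0 * under
    return content + w_sat * sat, g_content + w_sat * g_sat

p = np.full_like(target, 0.5)                # initial projector image
initial_loss, _ = loss_and_grad(p)
for _ in range(500):
    _, grad = loss_and_grad(p)
    p = np.clip(p - 0.2 * grad, 0.0, 1.0)    # projected gradient-descent step
final_loss, _ = loss_and_grad(p)
```

The saturation term discourages projector pixel values near the device's limits (where clipping destroys detail), while the content term pulls the *simulated* capture toward the target appearance, mirroring the trade-off the abstract describes.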