🤖 AI Summary
Current image editing methods prioritize semantic instruction execution while neglecting the physically grounded effects—such as shadows, reflections, and mechanical interactions—that should accompany edits like object deletion, leading to physically implausible outputs. This work presents the first systematic evaluation of physical plausibility in image editing. We introduce PICABench, a comprehensive benchmark spanning eight dimensions across optics, mechanics, and state changes. For rigorous assessment, we design PICAEval—an evaluation protocol that uses vision-language models (VLMs) as judges, grounded in per-case, region-level human annotations and questions, to enable fine-grained physical consistency scoring. To address the challenge, we construct PICA-100K, a training dataset built from videos so that models can learn physical priors from spatiotemporal dynamics. Experiments reveal severe physical inconsistencies in state-of-the-art models; training on PICA-100K significantly improves physical plausibility, establishing a benchmark, evaluation protocol, and data foundation for advancing image editing from "content accuracy" to "physical reasonableness."
📝 Abstract
Image editing has achieved remarkable progress recently. Modern editing models can already follow complex instructions to manipulate the original content. However, beyond completing the editing instructions, the accompanying physical effects are key to generation realism. For example, removing an object should also remove its shadow, reflections, and interactions with nearby objects. Unfortunately, existing models and benchmarks mainly focus on instruction completion but overlook these physical effects. So, at this moment, how far are we from physically realistic image editing? To answer this, we introduce PICABench, which systematically evaluates physical realism across eight sub-dimensions (spanning optics, mechanics, and state transitions) for most of the common editing operations (add, remove, attribute change, etc.). We further propose PICAEval, a reliable evaluation protocol that uses VLM-as-a-judge with per-case, region-level human annotations and questions. Beyond benchmarking, we also explore effective solutions by learning physics from videos and construct a training dataset, PICA-100K. After evaluating most of the mainstream models, we observe that physical realism remains a challenging problem with substantial room to explore. We hope that our benchmark and proposed solutions can serve as a foundation for future work moving from naive content editing toward physically consistent realism.