🤖 AI Summary
Existing image editing methods struggle to achieve precise spatial manipulation of objects in realistic scenes due to the lack of explicit modeling of 3D geometry and perspective projection. This work proposes PhyEdit, a novel framework that, for the first time, integrates plug-and-play 3D geometric priors into the editing pipeline. By leveraging geometric simulation to generate 3D contextual guidance and combining 2D–3D joint supervision, depth map alignment, and perspective projection constraints, PhyEdit ensures physically consistent and spatially accurate edits. The study also contributes RealManip-10K, a real-world dataset, and ManipEval, a multidimensional evaluation benchmark. Experiments demonstrate that PhyEdit significantly outperforms current state-of-the-art methods—including several strong closed-source baselines—in both 3D geometric fidelity and manipulation consistency.
📝 Abstract
Achieving physically accurate object manipulation in image editing is essential for potential applications in interactive world models. However, existing visual generative models often fail at precise spatial manipulation, resulting in incorrect scaling and positioning of objects. This limitation primarily stems from the lack of explicit mechanisms for incorporating 3D geometry and perspective projection. To achieve accurate manipulation, we develop PhyEdit, an image editing framework that leverages explicit geometric simulation as contextual 3D-aware visual guidance. By combining this plug-and-play 3D prior with joint 2D–3D supervision, our method effectively improves physical accuracy and manipulation consistency. To support this method and evaluate performance, we present RealManip-10K, a real-world dataset for 3D-aware object manipulation featuring paired images and depth annotations. We also propose ManipEval, a benchmark with multi-dimensional metrics for evaluating 3D spatial control and geometric consistency. Extensive experiments show that our approach outperforms existing methods, including strong closed-source models, in both 3D geometric accuracy and manipulation consistency.
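The abstract's claim that editors mis-scale objects without perspective projection can be illustrated with a minimal sketch (this is general pinhole-camera geometry, not PhyEdit's actual implementation): under perspective projection, an object's on-image size scales inversely with its depth, so physically consistent repositioning must rescale the object's 2D footprint accordingly.

```python
# Minimal sketch of the perspective constraint a physically consistent
# edit must respect (pinhole camera model; not the paper's code).

def projected_size(real_size: float, depth: float, focal: float = 1.0) -> float:
    """On-image size of an object of physical size `real_size` seen at `depth`."""
    return focal * real_size / depth

def rescale_factor(depth_from: float, depth_to: float) -> float:
    """Factor by which an object's 2D footprint must be rescaled when it is
    moved from depth_from to depth_to while keeping its physical size fixed."""
    return depth_from / depth_to

# Moving an object twice as far from the camera halves its apparent size:
print(rescale_factor(2.0, 4.0))  # -> 0.5
```

A 2D-only editor that pastes the object at its original pixel size after such a move violates exactly this constraint, which is the failure mode the depth-aware guidance is meant to prevent.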