🤖 AI Summary
This paper addresses the challenge of enabling intuitive, view-dependent 3D deformation of non-photorealistic (NPR) models through 2D interactions. To this end, the authors propose a view-aware 3D editing framework that synthesizes view-conditioned deformation fields and supports layer-like composition of 2D deformations. The method unifies 2D deformable-mesh control over both Gaussian-splatting and mesh-based representations, incorporating multi-view geometric consistency constraints and differentiable rendering optimization. Key technical components include 2D mesh-based editing, view-aware interpolation of 3D deformation fields, and spatial deformation of Gaussian splats. Experiments demonstrate effective editing of cartoon characters, hand-drawn portraits, occlusion repair, and classic NPR-style 3D models, achieving a favorable balance among editing intuitiveness, geometric fidelity, and view continuity.
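The summary mentions "view-aware interpolation of 3D deformation fields" between authored viewpoints. The paper's actual scheme composes deformations like layers; the sketch below is only a naive illustration of view-conditioned blending, where each authored deformation is weighted by the angular proximity of the current camera to the view it was authored from. All names (`view_weights`, `blend_deformations`, `sharpness`) are hypothetical, not from the paper.

```python
import numpy as np

def view_weights(view_dirs, current_dir, sharpness=8.0):
    """Blend weights for authored deformations, based on the angular
    proximity of the current camera direction to each authored view.
    `sharpness` (a made-up knob) controls how fast a deformation fades."""
    view_dirs = np.asarray(view_dirs, dtype=float)
    current_dir = np.asarray(current_dir, dtype=float)
    # Cosine similarity between the current view and each authored view.
    cos = view_dirs @ current_dir / (
        np.linalg.norm(view_dirs, axis=1) * np.linalg.norm(current_dir))
    # Exponential falloff: nearby views dominate; weights sum to 1.
    w = np.exp(sharpness * (cos - 1.0))
    return w / w.sum()

def blend_deformations(deform_vectors, weights):
    """Weighted sum of per-view 3D displacement fields sampled at the
    same points: shapes (V, N, 3) and (V,) -> (N, 3)."""
    return np.einsum('v,vnd->nd', np.asarray(weights),
                     np.asarray(deform_vectors))
```

A smooth falloff like this is what makes an edit authored at one viewpoint gradually disappear as the camera rotates away, rather than popping in and out.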
📝 Abstract
We propose a method for authoring non-realistic 3D objects (represented as either 3D Gaussian Splats or meshes) that comply with 2D edits from specific viewpoints. Namely, given a 3D object, a user chooses different viewpoints and interactively deforms the object in the 2D image plane of each view. The method then produces a "deformation field": an interpolation between those 2D deformations that varies smoothly as the viewpoint changes. Our core observation is that the 2D deformations need not be tied to an underlying object, nor share the same deformation space. We use this observation to devise a method for authoring view-dependent deformations, comprising several technical contributions: first, a novel way to compositionally blend the 2D deformations after lifting them to 3D, which enables the user to "stack" deformations like layers in editing software, each deformation operating on the result of the previous; second, a novel method to apply the 3D deformation to 3D Gaussian Splats; third, an approach to author the 2D deformations by deforming a 2D mesh encapsulating a rendered image of the object. We show the versatility and efficacy of our method by adding cartoonish effects to objects, providing means to modify human characters, fitting 3D models to given 2D sketches and caricatures, resolving occlusions, and recreating classic non-realistic paintings as 3D models.
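The abstract's second contribution is applying a 3D deformation to Gaussian Splats. The paper's exact formulation is not given here; the following is a generic sketch of one standard way to warp a Gaussian under a smooth map: move each mean through the deformation and transform each covariance by the deformation's local Jacobian (estimated by finite differences). The function `deform_gaussians` and its parameters are illustrative assumptions.

```python
import numpy as np

def deform_gaussians(means, covs, deform, eps=1e-4):
    """Apply a smooth spatial deformation `deform: (N,3) -> (N,3)` to
    3D Gaussians. Means are pushed through the map; each covariance is
    warped by the local Jacobian J of the map, since a Gaussian under
    x -> f(x) has covariance approximately J @ cov @ J.T.
    A generic sketch, not the paper's exact method."""
    means = np.asarray(means, dtype=float)
    covs = np.asarray(covs, dtype=float)
    new_means = deform(means)
    new_covs = np.empty_like(covs)
    for i, (mu, cov) in enumerate(zip(means, covs)):
        # Central finite-difference Jacobian of `deform` at mu.
        J = np.empty((3, 3))
        for a in range(3):
            d = np.zeros(3)
            d[a] = eps
            J[:, a] = (deform((mu + d)[None])[0]
                       - deform((mu - d)[None])[0]) / (2.0 * eps)
        new_covs[i] = J @ cov @ J.T
    return new_means, new_covs
```

For example, under the uniform scaling `f(x) = 2x`, every mean doubles and every covariance is scaled by 4, as the Jacobian is `2I`.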