🤖 AI Summary
This work addresses the lack of text controllability in image-to-3D generation models by proposing the first single-pass, text-driven editing framework. Methodologically: (1) it introduces plug-and-play text guidance into pre-trained image-to-3D models without iterative optimization; (2) it adopts a two-stage training paradigm: flow-matching-based cross-modal alignment followed by Direct Preference Optimization (DPO)-driven semantic fidelity fine-tuning; and (3) it designs a ControlNet-inspired controllable architecture coupled with an automated data engine, achieving high-quality cross-modal editing with only 100K samples. Experiments demonstrate that the method significantly outperforms prior approaches in both text adherence and preservation of original 3D geometry, while accelerating inference by 2.4–28.5×.
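The first training stage uses a flow-matching objective to align the new text condition with the pretrained generator. As a minimal sketch (the paper's exact parameterization and conditioning interface are not given here, so `model`, its signature, and the straight-path formulation are illustrative assumptions), a conditional flow-matching loss trains the network to predict the constant velocity along a linear path between noise and data:

```python
import torch

def flow_matching_loss(model, x0, x1, cond):
    """Hedged sketch of a conditional flow-matching loss.

    The model is trained to predict the constant velocity (x1 - x0)
    along the straight interpolation path x_t = (1 - t)*x0 + t*x1,
    given a timestep t and a condition (e.g. image + text features).
    The signature model(x_t, t, cond) is an assumption, not the
    paper's actual interface.
    """
    # Sample one timestep per example, broadcastable over features.
    t = torch.rand(x0.shape[0], device=x0.device).view(-1, 1)
    x_t = (1 - t) * x0 + t * x1      # point on the straight path
    v_target = x1 - x0               # target velocity field
    v_pred = model(x_t, t, cond)     # conditioned velocity prediction
    return ((v_pred - v_target) ** 2).mean()
```

The second stage (DPO fine-tuning) would then rank pairs of edited outputs by preference rather than regress a velocity target; that objective is omitted here.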
📄 Abstract
Recent progress in image-to-3D has opened up immense possibilities for design, AR/VR, and robotics. However, to use AI-generated 3D assets in real applications, a critical requirement is the capability to edit them easily. We present a feedforward method, Steer3D, to add text steerability to image-to-3D models, which enables editing of generated 3D assets with language. Our approach is inspired by ControlNet, which we adapt to image-to-3D generation to enable text steering directly in a forward pass. We build a scalable data engine for automatic data generation, and develop a two-stage training recipe based on flow-matching training and Direct Preference Optimization (DPO). Compared to competing methods, Steer3D more faithfully follows the language instruction and maintains better consistency with the original 3D asset, while being 2.4x to 28.5x faster. Steer3D demonstrates that it is possible to add a new modality (text) to steer the generation of pretrained image-to-3D generative models with only 100K training samples. Project website: https://glab-caltech.github.io/steer3d/
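The ControlNet-inspired design mentioned above typically keeps the pretrained backbone frozen and injects the new condition through a trainable branch whose output projection is initialized to zero, so the model's original behavior is exactly preserved at the start of fine-tuning. The sketch below illustrates that general pattern under stated assumptions; the class names (`ZeroLinear`, `TextControlBranch`) and the residual wiring are illustrative, not Steer3D's actual architecture:

```python
import torch
import torch.nn as nn

class ZeroLinear(nn.Module):
    """Linear projection initialized to zero, in the spirit of
    ControlNet's zero convolutions: at initialization the control
    branch contributes nothing to the frozen backbone's output."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

class TextControlBranch(nn.Module):
    """Hedged sketch of a ControlNet-style adapter: a trainable copy
    of a backbone block processes hidden states fused with text
    features, and its output is added back through a zero-initialized
    projection. `block` stands in for a (copied) backbone layer."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block           # trainable copy; backbone stays frozen
        self.zero = ZeroLinear(dim)

    def forward(self, h: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        # At init, self.zero(...) is all zeros, so the output equals h:
        # the pretrained image-to-3D behavior is untouched.
        return h + self.zero(self.block(h + text_feat))
```

This zero-initialization is what makes the text condition "plug-and-play": training can only gradually move the output away from the pretrained model's, which helps preserve the original 3D asset while the text signal is learned.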