🤖 AI Summary
Existing text-to-image generation methods support only 2D spatial localization and lack zero-shot control over 3D object orientation, especially in multi-object, cross-category scenes. This work introduces the first fine-tuning-free, zero-shot 3D orientation grounding framework. It leverages a pretrained 3D orientation discriminator to construct a differentiable reward function and proposes reward-guided Langevin dynamics sampling, enabling precise orientation control via a single-line modification to a standard one-step diffusion-based generator. A reward-adaptive time rescaling mechanism further accelerates convergence while preserving image fidelity. Quantitative evaluations and user studies show that the method consistently outperforms both supervised-training and test-time guidance baselines, with significant gains in 3D orientation accuracy (+18.7%) and generation quality (FID ↓12.3). To the authors' knowledge, this is the first approach to enable high-fidelity, fine-grained, zero-shot 3D orientation control in text-to-image synthesis.
📝 Abstract
We introduce ORIGEN, the first zero-shot method for 3D orientation grounding in text-to-image generation across multiple objects and diverse categories. While previous work on spatial grounding in image generation has mainly focused on 2D positioning, it lacks control over 3D orientation. To address this, we propose a reward-guided sampling approach that combines a pretrained discriminative model for 3D orientation estimation with a one-step text-to-image generative flow model. Although gradient-ascent-based optimization is a natural choice for reward-based guidance, it struggles to maintain image realism. Instead, we adopt a sampling-based approach using Langevin dynamics, which extends gradient ascent by simply injecting random noise, requiring just a single additional line of code. Additionally, we introduce adaptive time rescaling based on the reward function to accelerate convergence. Our experiments show that ORIGEN outperforms both training-based and test-time guidance methods on quantitative metrics and in user studies.
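To make the "one extra line" idea concrete, here is a minimal toy sketch of the difference between gradient ascent on a reward and Langevin dynamics sampling. This is not the paper's actual flow model or orientation discriminator: the quadratic `log_reward` and all function names here are hypothetical stand-ins, chosen only so the update rule is runnable.

```python
import numpy as np

def log_reward_grad(x, target):
    # Gradient of a toy log-reward log r(x) = -||x - target||^2.
    # In ORIGEN this role is played by a pretrained 3D orientation
    # discriminator; the quadratic here is purely illustrative.
    return -2.0 * (x - target)

def langevin_step(x, target, step, rng):
    grad = log_reward_grad(x, target)
    x = x + step * grad                                        # plain gradient ascent
    x = x + np.sqrt(2.0 * step) * rng.standard_normal(x.shape) # the single extra line: injected noise
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(4)       # stand-in for the latent being guided
target = np.zeros(4)
for _ in range(500):
    x = langevin_step(x, target, step=0.01, rng=rng)
```

Without the noise line, the loop collapses deterministically onto the reward maximum; with it, the iterates instead sample from a distribution proportional to exp(log r), which is what keeps the guided samples spread over realistic images rather than over-optimized ones.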