🤖 AI Summary
This work addresses the challenge of enforcing complex nonlinear constraints—such as road-legal regions in robotic control and autonomous driving—within generative models, where existing approaches often fail to simultaneously ensure constraint satisfaction and high-fidelity generation. The authors propose a constrained fine-tuning framework that adapts pre-trained generative models to produce outputs strictly confined within structured feasible regions, without compromising sample realism. By moving beyond the limitations of conventional fine-tuning and training-free strategies, the method navigates a new trade-off between constraint satisfaction and generation quality, performing favorably against existing baselines across diverse and intricate constraint scenarios.
📝 Abstract
Constrained generative modeling is fundamental to applications such as robotic control and autonomous driving, where models must respect physical laws and safety-critical constraints. In real-world settings, these constraints rarely take the form of simple linear inequalities; instead, they form complex feasible regions resembling road maps or other structured spatial domains. We propose a constrained generation framework that produces samples directly within such feasible regions while preserving realism. Our method fine-tunes a pretrained generative model to enforce constraints while maintaining generative fidelity. Experimentally, our method exhibits characteristics distinct from existing fine-tuning and training-free constrained baselines, revealing a new trade-off between constraint satisfaction and sampling quality.
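To make the idea of a structured, non-linear feasible region concrete, the sketch below is a generic illustration (not the paper's actual method, which the abstract does not detail): the "road map" is modeled as a union of axis-aligned boxes, and a differentiable penalty—the squared distance to the nearest box—is zero for feasible samples and can supply the gradient signal a penalty-based fine-tuning scheme would backpropagate into a generator. All names (`BOXES`, `penalty`, `constrain_step`) are hypothetical.

```python
import numpy as np

# Toy "road map": a union of axis-aligned boxes, each given by (lo, hi) corners.
BOXES = [
    (np.array([0.0, 0.0]), np.array([4.0, 1.0])),   # horizontal road segment
    (np.array([1.5, 0.0]), np.array([2.5, 4.0])),   # vertical road segment
]

def penalty(x):
    """Squared distance from point x to the nearest box; 0 if x is feasible."""
    return min(np.sum((x - np.clip(x, lo, hi)) ** 2) for lo, hi in BOXES)

def penalty_grad(x):
    """Gradient of the penalty w.r.t. x, taken against the nearest box."""
    lo, hi = min(BOXES, key=lambda b: np.sum((x - np.clip(x, *b)) ** 2))
    return 2.0 * (x - np.clip(x, lo, hi))

def constrain_step(x, lr=0.25):
    """One gradient step reducing constraint violation. In actual constrained
    fine-tuning, this gradient would flow into the generator's parameters
    alongside the usual generative (realism) objective rather than being
    applied to samples directly."""
    return x - lr * penalty_grad(x)

x = np.array([5.0, 2.0])        # an infeasible sample, off the road map
for _ in range(50):
    x = constrain_step(x)
print(penalty(x))               # violation shrinks toward 0
```

The quadratic penalty is the simplest choice; because it vanishes (with zero gradient) inside the feasible region, it constrains samples without distorting the model's behavior on already-feasible outputs, which is one way to preserve realism while enforcing constraints.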