AI Summary
Current text-to-video diffusion models still suffer from insufficient alignment when conditioned on spatial control signals such as bounding boxes. This work proposes an end-to-end bounding box refinement approach that fine-tunes the input box coordinates through differentiable smoothed masks and an attention-maximization optimization objective, explicitly aligning them with the model's internal attention distributions. By making only minor adjustments to the bounding box positions, the method substantially improves both the visual quality and the control fidelity of the generated videos. Experimental results and user studies demonstrate its effectiveness, marking the first explicit co-optimization between bounding box conditioning and the attention mechanism in diffusion models.
Abstract
With the recent rapid advancements in text-to-video diffusion models, controlling their generations has drawn interest. A popular way to control them is through bounding boxes or layouts. However, enforcing adherence to these control inputs is still an open problem. In this work, we show that by slightly adjusting user-provided bounding boxes we can improve both the quality of generations and the adherence to the control inputs. This is achieved by simply optimizing the bounding boxes to better align with the internal attention maps of the video diffusion model while carefully balancing the focus on foreground and background. In a sense, we are moving the bounding boxes to places the model is familiar with. Surprisingly, we find that even small modifications can change the quality of generations significantly. To enable this optimization, we propose a smooth mask that makes the bounding box position differentiable and an attention-maximization objective that we use to alter the bounding boxes. We conduct thorough experiments, including a user study, to validate the effectiveness of our method. Our code is made available on the project webpage to foster future research from the community.
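To make the idea concrete, here is a minimal sketch of the two ingredients the abstract names: a smooth (sigmoid-based) box mask whose value varies smoothly with the box coordinates, and an attention-maximization-style score that measures how much attention mass falls inside the soft box. All function names, the temperature `tau`, and the exact sigmoid parameterization are illustrative assumptions, not the authors' implementation (which would be autograd-based so the box coordinates receive gradients).

```python
import numpy as np

def smooth_box_mask(x0, y0, x1, y1, H, W, tau=2.0):
    """Soft rectangular mask on an H x W grid.

    Each edge of the box is a sigmoid ramp of width ~tau, so the mask
    value is a smooth function of (x0, y0, x1, y1). In an autograd
    framework, gradients of any loss on the mask would flow back to
    the box coordinates; tau is an assumed hyperparameter.
    """
    ys = np.arange(H, dtype=float)[:, None]   # column of row indices
    xs = np.arange(W, dtype=float)[None, :]   # row of column indices
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    my = sig((ys - y0) / tau) * sig((y1 - ys) / tau)  # soft "inside" along y
    mx = sig((xs - x0) / tau) * sig((x1 - xs) / tau)  # soft "inside" along x
    return my * mx  # ~1 deep inside the box, ~0 far outside

def box_attention_score(attn, mask):
    """Fraction of attention mass captured by the soft box.

    An attention-maximization objective would move the box (via the
    differentiable mask) to increase this score.
    """
    return float((attn * mask).sum() / mask.sum())
```

A score of this form increases when the box overlaps regions where the model's attention concentrates, which is the alignment the abstract describes; the foreground/background balancing mentioned in the text would add further terms on top of it.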