HiddenObjects: Scalable Diffusion-Distilled Spatial Priors for Object Placement

📅 2026-04-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing object placement methods, which rely on manual annotations or artifact-prone inpainting pipelines, leading to poor scalability and susceptibility to shortcut learning. The authors propose a fully automatic, scalable framework that, for the first time, efficiently distills a generalizable class-conditional spatial prior from text-to-image diffusion models. Using a diffusion-based inpainting pipeline coupled with a dense object-insertion evaluation protocol, they construct HiddenObjects, a large-scale dataset of 27 million placement annotations, and compress the resulting prior into a lightweight inference model. On a downstream image editing task, the method substantially outperforms sparse human annotations (VLM-Judge score 3.90 vs. 2.68), surpasses existing placement baselines and zero-shot vision-language models, and runs 230,000× faster at inference time.

📝 Abstract
We propose a method to learn explicit, class-conditioned spatial priors for object placement in natural scenes by distilling the implicit placement knowledge encoded in text-conditioned diffusion models. Prior work relies either on manually annotated data, which is inherently limited in scale, or on inpainting-based object-removal pipelines, whose artifacts promote shortcut learning. To address these limitations, we introduce a fully automated and scalable framework that evaluates dense object placements on high-quality real backgrounds using a diffusion-based inpainting pipeline. With this pipeline, we construct HiddenObjects, a large-scale dataset comprising 27M placement annotations, evaluated across 27k distinct scenes, with ranked bounding box insertions for different images and object categories. Experimental results show that our spatial priors outperform sparse human annotations on a downstream image editing task (3.90 vs. 2.68 VLM-Judge), and significantly surpass existing placement baselines and zero-shot Vision-Language Models for object placement. Furthermore, we distill these priors into a lightweight model for fast practical inference (230,000× faster).
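The distillation target described in the abstract — a lightweight, class-conditional spatial prior trained against placements ranked by a diffusion-based evaluator — can be sketched roughly as follows. This is a minimal illustration, not the paper's architecture: `SpatialPriorNet`, the coarse grid parameterization, and the toy "teacher" distribution are all hypothetical stand-ins, and real placements would also carry box scale, not just a grid cell.

```python
import numpy as np

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

class SpatialPriorNet:
    """Hypothetical lightweight prior: a per-class logit table over a
    coarse H x W grid of candidate placement cells (a stand-in for
    whatever compact model the paper actually distills into)."""
    def __init__(self, num_classes, grid=(8, 8), seed=0):
        rng = np.random.default_rng(seed)
        self.logits = rng.normal(scale=0.01, size=(num_classes, grid[0] * grid[1]))

    def predict(self, cls):
        # Class-conditional placement distribution over grid cells.
        return softmax(self.logits[cls])

    def distill_step(self, cls, teacher_probs, lr=1.0):
        # One cross-entropy distillation step against the "teacher"
        # distribution derived from diffusion-ranked insertions; for
        # softmax + cross-entropy, d(CE)/d(logits) = p - teacher.
        p = self.predict(cls)
        self.logits[cls] -= lr * (p - teacher_probs)
        return float(-(teacher_probs * np.log(p + 1e-12)).sum())

# Toy teacher: the diffusion-based evaluation prefers lower grid rows
# (e.g. objects resting on the ground plane).
teacher = softmax(np.linspace(-2.0, 2.0, 64))
net = SpatialPriorNet(num_classes=3)
losses = [net.distill_step(cls=0, teacher_probs=teacher) for _ in range(200)]
assert losses[-1] < losses[0]  # distillation loss decreases
```

Once trained, inference is a single table lookup plus softmax — which is the point of the distillation: the expensive diffusion-based evaluator is only needed offline to build the dataset, while placement queries hit the compact prior.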
Problem

Research questions and friction points this paper is trying to address.

object placement
spatial priors
diffusion models
scalable learning
natural scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion distillation
spatial priors
object placement
scalable dataset
image inpainting