BBQ-to-Image: Numeric Bounding Box and Qolor Control in Large-Scale Text-to-Image Models

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing text-to-image models struggle to precisely control object location, size, and color, limiting their utility in professional creative workflows. This work proposes BBQ, an approach that for the first time integrates numeric bounding boxes and RGB triplets into large-scale text-to-image generation. By unifying spatial and color conditions within structured textual prompts, BBQ achieves fine-grained control without altering the model architecture or inference strategy. Built on a flow-based Transformer and trained on an image-text dataset enriched with parametric annotations, the method supports intuitive user interactions such as drag-and-drop placement and color pickers. Experiments show that BBQ significantly outperforms existing methods in both bounding-box alignment accuracy and color fidelity, establishing a new paradigm for high-precision controllable generation through structured language.

📝 Abstract
Text-to-image models have rapidly advanced in realism and controllability, with recent approaches leveraging long, detailed captions to support fine-grained generation. However, a fundamental parametric gap remains: existing models rely on descriptive language, whereas professional workflows require precise numeric control over object location, size, and color. In this work, we introduce BBQ, a large-scale text-to-image model that directly conditions on numeric bounding boxes and RGB triplets within a unified structured-text framework. We obtain precise spatial and chromatic control by training on captions enriched with parametric annotations, without architectural modifications or inference-time optimization. This also enables intuitive user interfaces such as object dragging and color pickers, replacing ambiguous iterative prompting with precise, familiar controls. Across comprehensive evaluations, BBQ achieves strong box alignment and improves RGB color fidelity over state-of-the-art baselines. More broadly, our results support a new paradigm in which user intent is translated into an intermediate structured language, consumed by a flow-based transformer acting as a renderer and naturally accommodating numeric parameters.
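The abstract describes conditioning the model on numeric bounding boxes and RGB triplets embedded directly in structured text. The paper does not publish its exact prompt grammar, so the sketch below is purely illustrative: the tag names (`<obj>`, `<bbox>`, `<rgb>`), normalized corner coordinates, and 0-255 color values are assumptions showing how parametric annotations might be serialized into a caption.

```python
def build_structured_prompt(scene, objects):
    """Serialize per-object specs into a structured caption string.

    BBQ's real prompt format is not specified in this page; the tags
    used here are hypothetical placeholders for how numeric bounding
    boxes and RGB triplets could be expressed in plain text.
    """
    parts = [scene]
    for name, (x1, y1, x2, y2), (r, g, b) in objects:
        # Boxes as normalized [0, 1] corner coordinates; colors as
        # 0-255 RGB triplets. Both appear as literal numbers in text,
        # so the model consumes them like any other tokens.
        parts.append(
            f"<obj>{name}<bbox>{x1:.2f},{y1:.2f},{x2:.2f},{y2:.2f}"
            f"</bbox><rgb>{r},{g},{b}</rgb></obj>"
        )
    return " ".join(parts)

prompt = build_structured_prompt(
    "a minimalist poster",
    [("red balloon", (0.10, 0.15, 0.35, 0.45), (220, 30, 30))],
)
```

A UI such as a drag-and-drop canvas or color picker would emit exactly these numbers, which is why the abstract frames the model as a renderer for an intermediate structured language rather than a consumer of free-form descriptions.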
Problem

Research questions and friction points this paper is trying to address.

numeric control
text-to-image generation
bounding box
color fidelity
structured language
Innovation

Methods, ideas, or system contributions that make the work stand out.

numeric bounding box
RGB color control
structured-text conditioning
text-to-image generation
precise spatial control
Authors
Eliran Kachlon, BRIA AI
Alexander Visheratin, BRIA AI
Nimrod Sarid, BRIA AI
Tal Hacham, BRIA AI
Eyal Gutflaish, BRIA AI
Saar Huberman, unknown affiliation (Computer Vision, geometric processing, Deep Learning)
Hezi Zisman, BRIA AI
David Ruppin, BRIA AI
Ron Mokady, Tel Aviv University (Computer Vision, Deep Learning)